1 Introduction

In recent years, the search for examples of matrix-valued orthogonal polynomials that are common eigenfunctions of a second-order differential operator, that is to say, satisfying a bispectral property in the sense of [13], has received considerable attention, following the seminal work of A. Durán [15].

The theory of matrix-valued orthogonal polynomials was initiated by Krein in 1949 [37, 38] (see also [1, 2]), in connection with spectral analysis and moment problems. Nevertheless, the first examples of orthogonal matrix polynomials satisfying this extra property and not reducible to the scalar case appeared more recently in [19, 25, 27,28,29]. The collection of examples has been growing lately (see for instance [3, 4, 16, 17, 21, 22, 26, 34,35,36, 40,41,42]). Moreover, the problem of giving a general classification of these families of matrix-valued orthogonal polynomials as solutions of the so-called Matrix Bochner Problem has also been addressed recently in [7, 8] for the special case of \(2\times 2\) hypergeometric matrix differential operators.

As in the case of classical orthogonal polynomials, the families of matrix-valued orthogonal polynomials satisfy many formal properties, such as structural formulas (see for instance [3, 18, 20, 24, 34]), which have been very useful to compute explicitly the orthogonal polynomials associated with several of these families. Having these explicit formulas is essential when one looks for applications of these matrix-valued bispectral polynomials, such as in the problem of time and band limiting over a non-commutative ring and matrix-valued commuting operators, see [10,11,12, 30,31,32].

Recently, in [4], a new family of matrix-valued orthogonal polynomials of size \(2\times 2\) was introduced, which are common eigenfunctions of a differential operator of hypergeometric type (in the sense defined by Juan A. Tirao in [44]):

$$\begin{aligned} D= \frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}t(1-t)+ \frac{\mathrm{d}}{\mathrm{d}t}\left( C-tU\right) -V, \quad \text { with } U,V,C\in {\mathbb {C}} ^{2\times 2}. \end{aligned}$$

In particular, the polynomials \((P^{\left( \alpha ,\beta ,v\right) }_n)_{n\ge 0}\) introduced in [4], orthogonal with respect to the weight matrix \(W^{(\alpha ,\beta ,v)}\) given in (2.4) and (2.5), are common eigenfunctions of a hypergeometric operator with matrix eigenvalues \(\Lambda _n\), which are diagonal matrices with no repeated entries. This fact could be especially useful if one intends to use this family of polynomials in the context of time and band limiting, where the commutativity of the matrix-valued eigenvalues \((\Lambda _n)_n\) could play an important role.

In this paper, we give some structural formulas for the family of matrix-valued orthogonal polynomials introduced in [4]. In particular, in Sect. 3, we give a Rodrigues formula (see Theorem 3.1), which allows us to write this family of polynomials explicitly in terms of the classical Jacobi polynomials (see Corollary 3.3).

In Sect. 4, this Rodrigues formula allows us to compute the norms of the sequence of monic orthogonal polynomials and therefore, we can find the coefficients of the three-term recurrence relation and the Christoffel–Darboux identity for the sequence of orthonormal polynomials.

In Sect. 5, we obtain a Pearson equation (see Proposition 5.4), which allows us to prove that the sequence of derivatives of k-th order, \(k\ge 1\), of the orthogonal polynomials is also orthogonal with respect to the weight matrix given explicitly in Proposition 5.3.

In Sect. 6, following the ideas in [34, Section 5.1], we use the Pearson equation to give explicit lowering and raising operators for the sequence of derivatives. Thus, we deduce a Rodrigues formula for these polynomials and find a matrix-valued differential operator that has these matrix-valued polynomials as common eigenfunctions.

Finally, in Sect. 7, we describe the algebra of second-order differential operators associated with the weight matrix \(W^{(\alpha ,\beta ,v)}\) given in (2.4) and (2.5). Indeed, for a given weight matrix W, the analysis of the algebra D(W) of all differential operators that have a sequence of matrix-valued orthogonal polynomials with respect to W as eigenfunctions has received much attention in the literature in the last fifteen years [6, 8, 9, 33, 42, 45, 47]. While for classical orthogonal polynomials the structure of this algebra is well understood (see [39]), in the matrix setting, where this algebra is non-commutative, the situation is highly non-trivial.

2 Preliminaries

In this section, we give some background on matrix-valued orthogonal polynomials (see [23] for further details). A weight matrix W is a complex \(N\times N\) matrix-valued integrable function on the interval \((a,b)\), such that W is positive definite almost everywhere and has finite moments of all orders, i.e., \( \int _a^b t^{n}\mathrm{d}W(t)\in {\mathbb {C}}^{N \times N}, \ n \in {\mathbb {N}}\). The weight matrix W induces a Hermitian sesquilinear form,

$$\begin{aligned} \left\langle P,Q\right\rangle _{W}=\int _{a}^{b}P(t)W\left( t\right) Q^{*}\left( t\right) \mathrm{d}t, \end{aligned}$$

for any pair of \(N\times N\) matrix-valued functions P(t) and Q(t), where \(Q^{*}(t)\) denotes the conjugate transpose of Q(t).

A sequence \((P_n)_{n\ge 0}\) of orthogonal polynomials with respect to a weight matrix W is a sequence of matrix-valued polynomials such that \(P_n(t)\), \(n\ge 0\), is a matrix polynomial of degree n with non-singular leading coefficient, and \(\left\langle P_{n},P_{m}\right\rangle _{W}=\Delta _n\delta _{n,m}\), where \(\Delta _n\), \(n\ge 0\), is a positive definite matrix. When \(\Delta _n=I\), where I denotes the identity matrix, we say that the polynomials \((P_n)_{n\ge 0}\) are orthonormal. In particular, when the leading coefficient of \(P_n(t)\), \(n\ge 0\), is the identity matrix, we say that the polynomials \((P_n)_{n \ge 0}\) are monic.

Given a weight matrix W, there exists a unique sequence of monic orthogonal polynomials \(\left( P_{n}\right) _{n\ge 0}\) in \( {\mathbb {C}} ^{N\times N}[t]\), and any other sequence of orthogonal polynomials \(\left( Q_{n}\right) _{n\ge 0}\) can be written as \(Q_{n}(t)=K_{n}P_{n}(t)\) for some non-singular matrix \(K_{n}.\)

Any sequence of monic orthogonal matrix-valued polynomials \(\left( P_{n}\right) _{n\ge 0}\) satisfies a three-term recurrence relation

$$\begin{aligned} tP_{n}(t)=P_{n+1}(t)+B_{n}P_{n}(t)+A_{n}P_{n-1}(t),\quad \text { for }n\in {\mathbb {N}}_{0}, \end{aligned}$$

where \(P_{-1}(t)=0\), \(P_{0}(t)=I\). The \(N \times N \) matrix coefficients \(A_{n}\) and \(B_{n}\) enjoy certain properties; in particular, \(A_{n}\) is non-singular for any n.

Two weights W and \({\widetilde{W}}\) are said to be equivalent if there exists a non-singular matrix M, which does not depend on t, such that

$$\begin{aligned} {\widetilde{W}}(t)=MW(t)M^{*},\quad \text { for all }t\in (a,b). \end{aligned}$$

A weight matrix W reduces to a smaller size if there exists a non-singular matrix M such that

$$\begin{aligned} MW(t)M^{*}= \begin{pmatrix} W_{1}(t) &{} 0 \\ 0 &{} W_{2}(t) \end{pmatrix} ,\quad \text { for all }t\in (a,b), \end{aligned}$$

where \(W_{1}\) and \(W_{2}\) are weights of smaller size. A weight matrix W is said to be irreducible if it does not reduce to a smaller size (see [19, 46]).

Let D be a right-hand side ordinary differential operator with matrix-valued polynomial coefficients,

$$\begin{aligned} D=\sum _{i=0}^{s}\partial ^{i}F_{i}\left( t\right) ,\,\quad \partial ^{i}=\frac{\mathrm{d}^{i}}{\mathrm{d}t^{i}}. \end{aligned}$$

The operator D acts on a polynomial function \(P\left( t\right) \) as \( PD=\sum _{i=0}^{s}\partial ^{i}PF_{i}\left( t\right) .\)
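For concreteness, this right action is easy to encode symbolically. The following Python sketch (our illustration, not part of the paper) implements \(PD=\sum _{i=0}^{s}\partial ^{i}PF_{i}(t)\) with SymPy and checks it on the scalar Jacobi operator with \(\alpha =\beta =0\), for which \(p_{1}^{(0,0)}(1-2t)=1-2t\) is an eigenfunction with eigenvalue \(-n(n+\alpha +\beta +1)=-2\):

```python
import sympy as sp

t = sp.symbols('t')

def right_action(P, Fs):
    """PD = sum_i (d^i P / dt^i) * F_i(t), with Fs = [F_0, F_1, ..., F_s]."""
    return sum(sp.diff(P, t, i) * F for i, F in enumerate(Fs))

# Scalar illustration: the Jacobi operator for alpha = beta = 0 has
# F_2 = t(1-t), F_1 = 1 - 2t, F_0 = 0, and p_1(1-2t) = 1 - 2t is an
# eigenfunction with eigenvalue -2.
p1 = 1 - 2*t
PD = right_action(p1, [0, 1 - 2*t, t*(1 - t)])
assert sp.expand(PD + 2*p1) == 0
```

The same helper works verbatim when P and the \(F_i\) are SymPy matrices, since only differentiation and right multiplication are used.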

We say that the differential operator D is symmetric with respect to W if

$$\begin{aligned} \left\langle PD,Q\right\rangle _{W}=\left\langle P,QD\right\rangle _{W},\ \text {for all}\ P,Q\in {\mathbb {C}} ^{N\times N}[t]. \end{aligned}$$
(2.1)

The differential operator \(D=\displaystyle \frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}F_{2}(t)+\frac{\mathrm{d}}{\mathrm{d}t} F_{1}(t)+F_{0}\) is symmetric with respect to W if and only if [19, Theorem 3.1]

$$\begin{aligned} F_{2}W= & {} WF_{2}^{*} , \nonumber \\ 2\left( F_{2}W\right) ^{\prime }= & {} F_{1}W+WF_{1}^{*}, \nonumber \\ \left( F_{2}W\right) ^{\prime \prime }-\left( F_{1}W\right) ^{\prime }= & {} WF_{0}^{*}-F_{0}W , \end{aligned}$$
(2.2)

and

$$\begin{aligned} \lim _{t\rightarrow a,b} F_{2}\left( t\right) W\left( t\right) =0\quad \text { and }\quad \lim _{t\rightarrow a,b}\left( F_{1}\left( t\right) W\left( t\right) -W\left( t\right) F_{1}^{*}\left( t\right) \right) =0. \end{aligned}$$
(2.3)

We will need the following result to find the Rodrigues formula for the sequence of orthogonal polynomials with respect to a weight matrix W.

Theorem 2.1

[18, Lemma 1.1] Let \(F_{2}\), \(F_{1}\) and \(F_{0}\) be matrix polynomials of degrees not larger than 2, 1, and 0, respectively. Let W, \(R_{n}\) be \(N\times N\) matrix functions twice and n times differentiable, respectively, in an open set \(\Omega \) of the real line. Assume that W(t) is non-singular for \(t\in \Omega \) and that it satisfies the identity and the differential equations in (2.2). Define the functions \(P_{n}\), \(n\ge 1\), by

$$\begin{aligned} P_{n}=R_{n}^{(n)}W^{-1}. \end{aligned}$$

If, for a matrix \(\Delta _{n}\), the function \(R_{n}\) satisfies

$$\begin{aligned} \left( R_{n}F_{2}^{*}\right) ^{\prime \prime }-\left( R_{n}[F_{1}^{*}+n\left( F_{2}^{*}\right) ^{\prime }]\right) ^{\prime }+R_{n}[F_{0}^{*}+n\left( F_{1}^{*}\right) ^{\prime }+ \begin{pmatrix} n \\ 2 \end{pmatrix} \left( F_{2}^{*}\right) ^{^{\prime \prime }}]=\Delta _{n}R_{n}, \end{aligned}$$

then the function \(P_{n}\) satisfies

$$\begin{aligned} P_{n}^{^{\prime \prime }}\left( t\right) F_{2}\left( t\right) + P_{n}^{^{\prime }}\left( t\right) F_{1}\left( t\right) + P_{n}\left( t\right) F_{0}\left( t\right) = \Delta _{n}P_{n}\left( t\right) . \end{aligned}$$

2.1 The Family of Matrix-Valued Orthogonal Polynomials

In [4], the authors introduce a Jacobi-type weight matrix \( W^{\left( \alpha ,\beta ,v\right) }\left( t\right) \) and a differential operator \(D^{\left( \alpha ,\beta ,v\right) }\) such that \(D^{\left( \alpha ,\beta ,v\right) }\) is symmetric with respect to the weight matrix \( W^{\left( \alpha ,\beta ,v\right) }\left( t\right) \).

Let \(\alpha \), \(\beta \), \(v\in {\mathbb {R}}\), \(\alpha ,\beta >-1\) and \(|\alpha -\beta |<|v|<\alpha +\beta +2\). We consider the weight matrix function

$$\begin{aligned} W^{\left( \alpha ,\beta ,v\right) }(t)=t^{\alpha }\left( 1-t\right) ^{\beta }\,{\widetilde{W}}^{\left( \alpha ,\beta ,v\right) }\left( t\right) , \quad \text { for }t\in (0,1), \end{aligned}$$
(2.4)

with

$$\begin{aligned}&\small {{\widetilde{W}}^{\left( \alpha ,\beta ,v\right) }\left( t\right) } \nonumber \\&\quad =\begin{pmatrix} \dfrac{v(\kappa _{v,\beta }+2)}{\kappa _{v,-\beta }}t^{2}-\left( \kappa _{v,\beta }+2 \right) t+(\alpha +1) &{} (\alpha +\beta +2)t-(\alpha +1) \\ (\alpha +\beta +2)t-(\alpha +1) &{} -\dfrac{v(\kappa _{-v,\beta }+2)}{\kappa _{-v,-\beta } }t^{2}-\left( \kappa _{-v,\beta }+2 \right) t+(\alpha +1) \end{pmatrix},\nonumber \\ \end{aligned}$$
(2.5)

where, for the sake of clarity, we use the following notation in the rest of the paper:

$$\begin{aligned} \kappa _{\pm v,\pm \beta }=\alpha \pm v \pm \beta \ . \end{aligned}$$
(2.6)

\(W^{\left( \alpha ,\beta ,v\right) }\) is an irreducible weight matrix and the hypergeometric-type differential operator given by

$$\begin{aligned} D^{\left( \alpha ,\beta ,v\right) }= \displaystyle \frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}F_{2}\left( t\right) +\frac{\mathrm{d}}{\mathrm{d}t}F_{1}\left( t\right) + F_{0}\left( t\right) , \end{aligned}$$
(2.7)

where

$$\begin{aligned} F_{2}\left( t\right) =t(1-t),\ F_{1}\left( t\right) =C^{*}-tU\ \text {and}\ \ F_{0}\left( t\right) =-V, \end{aligned}$$
(2.8)

and

$$\begin{aligned} C= \begin{pmatrix} \alpha +1-\dfrac{\kappa _{-v,-\beta }}{v} &{} \dfrac{\kappa _{v,-\beta }}{v} \\ -\dfrac{\kappa _{-v,-\beta }}{v} &{} \alpha +1+\dfrac{\kappa _{v,-\beta } }{v} \end{pmatrix} ,\ U=\left( \alpha +\beta +4\right) I \ \text { and }V= \begin{pmatrix} v &{} 0 \\ 0 &{} 0 \end{pmatrix}, \end{aligned}$$
(2.9)

is symmetric with respect to the weight matrix \(W^{\left( \alpha ,\beta ,v\right) }\).
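As a sanity check (ours, not part of [4]), the three conditions in (2.2) can be verified symbolically for an admissible sample choice of parameters, say \(\alpha =\beta =0\) and \(v=1\). A SymPy sketch:

```python
import sympy as sp

t = sp.symbols('t')
al, be, v = 0, 0, 1                      # admissible: |al - be| < |v| < al + be + 2
k = lambda sv, sb: al + sv*v + sb*be     # kappa_{±v,±β} from (2.6)
I2 = sp.eye(2)

# weight matrix (2.4)-(2.5)
W = t**al * (1 - t)**be * sp.Matrix(
    [[sp.Rational(v*(k(1, 1) + 2), k(1, -1))*t**2 - (k(1, 1) + 2)*t + al + 1,
      (al + be + 2)*t - (al + 1)],
     [(al + be + 2)*t - (al + 1),
      -sp.Rational(v*(k(-1, 1) + 2), k(-1, -1))*t**2 - (k(-1, 1) + 2)*t + al + 1]])

# coefficients (2.8)-(2.9); C is real here, so C* = C.T
C = sp.Matrix([[al + 1 - sp.Rational(k(-1, -1), v), sp.Rational(k(1, -1), v)],
               [-sp.Rational(k(-1, -1), v), al + 1 + sp.Rational(k(1, -1), v)]])
U, V = (al + be + 4)*I2, sp.Matrix([[v, 0], [0, 0]])
F2, F1, F0 = t*(1 - t)*I2, C.T - t*U, -V

# the identity and the two differential equations in (2.2)
eq1 = F2*W - W*F2.T
eq2 = 2*(F2*W).diff(t) - (F1*W + W*F1.T)
eq3 = (F2*W).diff(t, 2) - (F1*W).diff(t) - (W*F0.T - F0*W)
assert all(e.applyfunc(sp.expand) == sp.zeros(2, 2) for e in (eq1, eq2, eq3))
```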

In the same paper, the authors also give the corresponding monic orthogonal polynomials in terms of the hypergeometric function \(_{2}H_{1}\left( {C,U,V};t\right) \) defined by J. A. Tirao in [44] and their three-term recurrence relation.

Proposition 2.2

[4, Theorem 4.3] Let \(\left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\) be the sequence of matrix-valued monic orthogonal polynomials associated with the weight function \(W^{\left( \alpha ,\beta ,v\right) }(t)\). Then, \(P_{n}^{\left( \alpha ,\beta ,v\right) }\) is an eigenfunction of the differential operator \( D^{\left( \alpha ,\beta ,v\right) }\) with diagonal eigenvalue

$$\begin{aligned} \Lambda _{n}= \begin{pmatrix} \lambda _{n} &{} 0 \\ 0 &{} \mu _{n} \end{pmatrix} ,\quad \begin{array}{c} \lambda _{n}=-n(n-1)-n\left( \alpha +\beta +4\right) -v, \\ \mu _{n}=-n(n-1)-n\left( \alpha +\beta +4\right) . \end{array} \end{aligned}$$
(2.10)

Moreover, by [4, Section 4.2], the matrix-valued monic orthogonal polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\) are given by

$$\begin{aligned} \left( P_{n}^{\left( \alpha ,\beta ,v\right) } \left( t \right) \right) ^{*} =&\,_{2}H_{1}\left( {C,U,V+\lambda _{n}I} ;t\right) n! \left[ C,U,V+\lambda _{n}I\right] _{n}^{-1} \begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \end{pmatrix} \nonumber \\&\quad +\,_{2}H_{1}\left( {C,U,V+\mu _{n}I};t\right) n!\left[ C,U,V+\mu _{n}I \right] _{n}^{-1} \begin{pmatrix} 0 &{} 0 \\ 0 &{} 1 \end{pmatrix}, \end{aligned}$$
(2.11)

where

$$\begin{aligned} _{2}H_{1}\left( {C,U,V};t\right) =\sum \limits _{k\ge 0}\left[ C,U,V\right] _{k}\frac{t^{k}}{k!}, \end{aligned}$$

and \(\left[ C,U,V\right] _{k}\) is defined inductively as \(\left[ C,U,V \right] _{0}=I\) and

$$\begin{aligned} \left[ C,U,V\right] _{k+1}=\left( C+kI\right) ^{-1}\left( k\left( k-1\right) I+kU+V\right) \left[ C,U,V\right] _{k}. \end{aligned}$$
(2.12)
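Numerically, the coefficients \(\left[ C,U,V\right] _{k}\) can be generated directly from (2.12), provided \(C+kI\) is non-singular at every step. The sketch below (our illustration; the function names are ours) does this and, as a sanity check, uses the scalar reduction \(C=c\), \(U=a+b+1\), \(V=ab\), for which \(\left[ C,U,V\right] _{k}=(a)_{k}(b)_{k}/(c)_{k}\) and \(_{2}H_{1}\) collapses to the Gauss function \(_{2}F_{1}(a,b;c;t)\); for \(a=1\), \(b=3\), \(c=2\), Euler's transformation gives \(_{2}F_{1}(1,3;2;t)=(1-t)^{-2}(1-t/2)\).

```python
import numpy as np
from math import factorial

def bracket(C, U, V, k):
    """[C,U,V]_k via the recursion (2.12); requires C + jI invertible for j < k."""
    N = C.shape[0]
    B = np.eye(N)
    for j in range(k):
        B = np.linalg.solve(C + j*np.eye(N),
                            (j*(j - 1)*np.eye(N) + j*U + V) @ B)
    return B

def hyp2H1(C, U, V, t, terms=60):
    """Truncated matrix hypergeometric series 2H1(C,U,V;t) = sum_k [C,U,V]_k t^k/k!."""
    return sum(bracket(C, U, V, k) * t**k / factorial(k) for k in range(terms))

# scalar sanity check with a, b, c = 1, 3, 2 (so U = a+b+1 = 5, V = ab = 3)
c_ = lambda x: np.array([[float(x)]])
assert abs(bracket(c_(2), c_(5), c_(3), 2)[0, 0] - 4.0) < 1e-12    # (1)_2 (3)_2 / (2)_2
assert abs(hyp2H1(c_(2), c_(5), c_(3), 0.2)[0, 0] - 0.9/0.64) < 1e-9
```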

Proposition 2.3

[4, Theorem 3.12] The monic orthogonal polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\) satisfy the three-term recurrence relation

$$\begin{aligned} tP_{n}^{\left( \alpha ,\beta ,v\right) }(t)=P_{n+1}^{\left( \alpha ,\beta ,v\right) }(t)+B_{n}^{\left( \alpha ,\beta ,v\right) }P_{n}^{\left( \alpha ,\beta ,v\right) }(t)+A_{n}^{\left( \alpha ,\beta ,v\right) }P_{n-1}^{\left( \alpha ,\beta ,v\right) } (t) \end{aligned}$$
(2.13)

where

$$\begin{aligned} A_{n}^{\left( \alpha ,\beta ,v\right) } =a_n^{\left( \alpha ,\beta ,v\right) }\begin{pmatrix} \left( 4+2n+\kappa _{-v,\beta }\right) \left( 2n+\kappa _{v,\beta }\right) &{} 0 \\ 0 &{} \left( 4+2n+ \kappa _{v,\beta }\right) \left( 2n+\kappa _{-v,\beta } \right) \end{pmatrix}, \end{aligned}$$
(2.14)

with

$$\begin{aligned}&\small {a_n^{\left( \alpha ,\beta ,v\right) }}\nonumber \\&\quad \small {= \frac{n(1+n+\alpha )(1+n+\beta )(2+n+\alpha +\beta )}{(1+2n+\alpha +\beta )(2+2n+\alpha +\beta )^{2}(3+2n+\alpha +\beta )(2+2n+\kappa _{-v,\beta } )(2+2n+\kappa _{v,\beta } )}, \ n \ge 0,} \end{aligned}$$
(2.15)

and the entries of \(B_n=B^{\left( \alpha ,\beta ,v\right) }_n\), \(n\ge 0\), are

$$\begin{aligned} \left( B_{n}\right) _{11}&=-n\frac{(\alpha +n)v-\kappa _{-v,-\beta }}{ (\alpha +\beta +2n+2)v}+(n+1)\frac{(\alpha +n+1)v-\kappa _{-v,-\beta }}{ (\alpha +\beta +2n+4)v}, \nonumber \\ \left( B_{n}\right) _{21}&={\frac{ \kappa _{v,-\beta } \left( \kappa _{-v,\beta }+2 \right) }{v \left( \kappa _{-v,\beta } +2 \,n+2 \right) \left( \kappa _{-v,\beta } +2\,n+4 \right) }},\nonumber \\ \left( B_{n}\right) _{12}&={\frac{ -\kappa _{-v,-\beta } \left( \kappa _{v,\beta }+2 \right) }{v \left( \kappa _{v,\beta }+2 \,n+2 \right) \left( \kappa _{v,\beta }+2\,n+4 \right) }} , \nonumber \\ \left( B_{n}\right) _{22}&=-n\frac{(\alpha +n)v+\kappa _{v,-\beta }}{ (\alpha +\beta +2n+2)v}+(n+1)\frac{(\alpha +n+1)v+\kappa _{v,-\beta }}{ (\alpha +\beta +2n+4)v} . \end{aligned}$$
(2.16)

Using the symmetry condition (2.1) and the three-term recurrence relation (2.13), one can easily see that the coefficients \(A_{n}^{\left( \alpha ,\beta ,v\right) }\) and \(B_{n}^{\left( \alpha ,\beta ,v\right) }\) satisfy the identities:

$$\begin{aligned} A_{n}^{\left( \alpha ,\beta ,v\right) }\left\| P_{n-1}^{\left( \alpha ,\beta ,v\right) }\left( t\right) \right\| ^{2}= & {} \left\| P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) \right\| ^{2}, \end{aligned}$$
(2.17)
$$\begin{aligned} \left( B_{n}^{\left( \alpha ,\beta ,v\right) }\left\| P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) \right\| ^{2}\right) ^{*}= & {} B_{n}^{\left( \alpha ,\beta ,v\right) }\left\| P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) \right\| ^{2} . \end{aligned}$$
(2.18)
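Both identities can be tested numerically for small n by building the monic polynomials from the moments of W. A SymPy sketch (ours, not from [4]) verifying (2.17) for \(n=1\) at the admissible sample parameters \(\alpha =\beta =0\), \(v=1\):

```python
import sympy as sp

t = sp.symbols('t')
al, be, v = 0, 0, 1                      # admissible: |al - be| < |v| < al + be + 2
k = lambda sv, sb: al + sv*v + sb*be     # kappa_{±v,±β} from (2.6)

# weight matrix (2.4)-(2.5) at these parameters
W = t**al * (1 - t)**be * sp.Matrix(
    [[sp.Rational(v*(k(1, 1) + 2), k(1, -1))*t**2 - (k(1, 1) + 2)*t + al + 1,
      (al + be + 2)*t - (al + 1)],
     [(al + be + 2)*t - (al + 1),
      -sp.Rational(v*(k(-1, 1) + 2), k(-1, -1))*t**2 - (k(-1, 1) + 2)*t + al + 1]])

mom = lambda j: (t**j * W).applyfunc(lambda e: sp.integrate(e, (t, 0, 1)))
S0, S1, S2 = mom(0), mom(1), mom(2)

M = S1 * S0.inv()                        # the monic P1 is P1(t) = t*I - M
norm0 = S0                               # ||P0||^2 = <I, I>_W
norm1 = S2 - S1*M.T - M*S1 + M*S0*M.T    # ||P1||^2 = <P1, P1>_W expanded

# A_1 from (2.14) and (2.15) with n = 1
n = 1
a1 = sp.Rational(n*(1 + n + al)*(1 + n + be)*(2 + n + al + be),
                 (1 + 2*n + al + be)*(2 + 2*n + al + be)**2*(3 + 2*n + al + be)
                 * (2 + 2*n + k(-1, 1))*(2 + 2*n + k(1, 1)))
A1 = a1 * sp.diag((4 + 2*n + k(-1, 1))*(2*n + k(1, 1)),
                  (4 + 2*n + k(1, 1))*(2*n + k(-1, 1)))

assert A1*norm0 == norm1                 # identity (2.17) for n = 1
```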

3 Rodrigues Formula

In this section, we will provide a Rodrigues formula for the sequence of monic orthogonal polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\) with respect to the weight matrix \(W=W^{\left( \alpha ,\beta ,v\right) }\) in (2.4). Moreover, the Rodrigues formula will allow us to find an explicit expression for the polynomials in terms of Jacobi polynomials.

Theorem 3.1

Consider the weight matrix \(W(t)=W^{\left( \alpha ,\beta ,v\right) }(t)\) given by the expressions in (2.4) and (2.5), and the matrix-valued functions \(\left( P_{n}\right) _{n\ge 0}\) and \(\left( R_{n}\right) _{n\ge 0}\) defined by

$$\begin{aligned} P_{n}\left( t\right)= & {} \left( R_{n}\left( t\right) \right) ^{\left( n\right) }\left( W\left( t\right) \right) ^{-1} , \end{aligned}$$
(3.1)
$$\begin{aligned} R_{n}(t)=R_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right)= & {} t^{n+\alpha }\left( 1-t\right) ^{n+\beta }\left( R_{n,2}^{\left( \alpha ,\beta ,v\right) }t^{2}+R_{n,1}^{\left( \alpha ,\beta ,v\right) }t+R_{n,0}^{\left( \alpha ,\beta ,v\right) }\right) , \end{aligned}$$
(3.2)

with

$$\begin{aligned} R_{n,2}^{\left( \alpha ,\beta ,v\right) }= & {} R_{n,2}= \begin{pmatrix} c_{n} &{} 0 \\ 0 &{} d_{n} \end{pmatrix},\\ R_{n,1}^{\left( \alpha ,\beta ,v\right) }= & {} R_{n,1}= \frac{1}{v}\begin{pmatrix} -c_{n}\kappa _{v,-\beta } &{} \dfrac{c_{n}(\alpha +2n+2+\beta )\kappa _{v,-\beta }}{\left( \kappa _{v,\beta } +2n+2\right) } \\ -\dfrac{d_{n}(\alpha +2n+2+\beta )\kappa _{-v,-\beta }}{\left( \kappa _{-v,\beta } +2n+2\right) } &{} d_{n}\kappa _{-v,-\beta } \end{pmatrix},\\ R_{n,0}^{\left( \alpha ,\beta ,v\right) }= & {} R_{n,0} =\frac{1+n+\alpha }{v} \begin{pmatrix} c_{n}\dfrac{\kappa _{v,-\beta }}{\left( \kappa _{v,\beta } +2n+2\right) } &{} -c_{n} \dfrac{\kappa _{v,-\beta }}{\left( \kappa _{v,\beta } +2n+2\right) } \\ d_{n}\dfrac{\kappa _{-v,-\beta }}{\left( \kappa _{-v,\beta } +2n+2\right) } &{} -d_{n} \dfrac{\kappa _{-v,-\beta }}{\left( \kappa _{-v,\beta } +2n+2\right) } \end{pmatrix} , \end{aligned}$$

where \((c_n)_n\) and \((d_n)_n\) are arbitrary sequences of nonzero complex numbers. Then, \(P_n(t)\) is a polynomial of degree n with non-singular leading coefficient equal to

$$\begin{aligned} \begin{pmatrix} \dfrac{\kappa _{v,-\beta } \left( \alpha +\beta +n+3\right) _{n}}{ \left( -1\right) ^{n}v\left( \kappa _{v,\beta } +2\right) }c_{n} &{} 0 \\ 0 &{} \dfrac{\kappa _{-v,-\beta } \left( \alpha +\beta +n+3\right) _{n}}{\left( -1\right) ^{n+1}v\left( \kappa _{-v,\beta }+2\right) }d_{n} \end{pmatrix}, \end{aligned}$$

where \((a)_n=a(a+1)\ldots (a+n-1)\) denotes the usual Pochhammer symbol. Moreover, if we put

$$\begin{aligned} c_{n}=\frac{\left( -1\right) ^{n}v\left( \kappa _{v,\beta }+2\right) }{\kappa _{v,-\beta } \left( \alpha +\beta +n+3\right) _{n}},\quad d_{n}=\frac{\left( -1\right) ^{n+1}v\left( \kappa _{-v,\beta }+2\right) }{ \kappa _{-v,-\beta } \left( \alpha +\beta +n+3\right) _{n}}, \end{aligned}$$
(3.3)

then \(\left( P_{n}\right) _{n\ge 0}\) is a sequence of monic orthogonal polynomials with respect to W and \(P_{n}=P_{n}^{\left( \alpha ,\beta ,v\right) }.\)

Proof

Let W be the weight matrix given in (2.4), let \(F_{2}\), \(F_{1}\), \(F_{0}\) be the polynomial coefficients defined in (2.8) and (2.9), and let \(\Lambda _{n}\) be the eigenvalue given in (2.10).

A straightforward computation shows that the matrix-valued function \(R_{n}(t)\) satisfies the equation

$$\begin{aligned} \left( R_{n}F_{2}^{*}\right) ^{\prime \prime }-\left( R_{n}[F_{1}^{*}+n\left( F_{2}^{*}\right) ^{\prime }]\right) ^{\prime }+R_{n}[F_{0}^{*}+n\left( F_{1}^{*}\right) ^{\prime }+ \begin{pmatrix} n \\ 2 \end{pmatrix} \left( F_{2}^{*}\right) ^{^{\prime \prime }}]=\Lambda _{n}R_{n}. \end{aligned}$$

Theorem 2.1 guarantees that the function \(P_{n}\left( t\right) =\left( R_{n}\left( t\right) \right) ^{(n)} \left( W\left( t\right) \right) ^{-1} \) is an eigenfunction of \( D^{\left( \alpha ,\beta ,v\right) }\) with eigenvalue \(\Lambda _{n}\) given in (2.10). Then, \( P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) \) and \(P_{n}\left( t\right) \) satisfy the same differential equation.

We now prove that \(P_{n}\) is a polynomial of degree n with non-singular leading coefficient. We will use the following Rodrigues formula for the classical Jacobi polynomial \(p_{n}^{(\alpha ,\beta )}(t)\) [43, Chapter IV]:

$$\begin{aligned} \frac{\mathrm{d}^{n}}{\mathrm{d}t^{n}}\left[ t^{n+\alpha }\left( 1-t\right) ^{n+\beta }\right] =n!t^{\alpha }\left( 1-t\right) ^{\beta }p_{n}^{(\alpha ,\beta )}(1-2t), \end{aligned}$$

where

$$\begin{aligned} p_{n}^{(\alpha ,\beta )}(1-2t)=\frac{\Gamma \left( n+\alpha +1\right) }{n!\Gamma \left( n+\alpha +\beta +1\right) }\sum _{j=0}^{n} \begin{pmatrix} n \\ j \end{pmatrix} \frac{\Gamma \left( n+\alpha +\beta +1+j\right) }{\Gamma \left( j+\alpha +1\right) } \left( -1\right) ^{j}t^{j}. \end{aligned}$$
(3.4)

Thus, we obtain

$$\begin{aligned} R_{n}^{(n)}\left( t\right)= & {} n!t^{\alpha }\left( 1-t\right) ^{\beta }\left( p_{n}^{(\alpha +2,\beta )}(1-2t)R_{n,2}t^{2}\right. \\&\left. +p_{n}^{(\alpha +1,\beta )}(1-2t)R_{n,1}t+p_{n}^{(\alpha ,\beta )}(1-2t)R_{n,0}\right) . \end{aligned}$$

We can rewrite \(\left( W\left( t\right) \right) ^{-1}\) as

$$\begin{aligned} \left( W\left( t\right) \right) ^{-1} =t^{-\alpha -2}\left( 1-t\right) ^{-\beta -2}\left( J_{2}t^{2}+J_{1}t+J_{0}\right) , \end{aligned}$$

with

$$\begin{aligned} J_{2}= & {} \begin{pmatrix} \dfrac{\kappa _{v,-\beta }}{v(\kappa _{v,\beta } +2)} &{} 0 \\ 0 &{} -\dfrac{\kappa _{-v,-\beta } }{v(\kappa _{-v,\beta }+2)} \end{pmatrix}, \quad J_{0} =\dfrac{-\kappa _{v,-\beta } \kappa _{-v,-\beta } (\alpha +1)}{v^{2}(\kappa _{v,\beta }+2)(\kappa _{-v,\beta } +2)}\begin{pmatrix} 1 &{} 1 \\ 1 &{} 1 \end{pmatrix},\\ J_{1}= & {} \dfrac{\kappa _{v,-\beta } \kappa _{-v,-\beta }}{v^{2}} \begin{pmatrix} \dfrac{1}{(\kappa _{v,\beta } +2)} &{} \dfrac{(\alpha +\beta +2)}{(\kappa _{v,\beta } +2)(\kappa _{-v,\beta } +2)} \\ \dfrac{(\alpha +\beta +2)}{(\kappa _{v,\beta } +2)(\kappa _{-v,\beta }+2)} &{} \dfrac{1}{(\kappa _{-v,\beta }+2)} \end{pmatrix}. \end{aligned}$$

Observe that \(R_{n,0}J_{0}=0.\) Thus, \(P_{n}\left( t\right) \) becomes

$$\begin{aligned} P_{n}\left( t\right)= & {} n!t^{-1}\left( 1-t\right) ^{-2}\left[ p_{n}^{(\alpha +2,\beta )}(1-2t)R_{n,2}t\left( J_{2}t^{2}+J_{1}t+J_{0}\right) \right. \\&+p_{n}^{(\alpha +1,\beta )}(1-2t)R_{n,1}\left( J_{2}t^{2}+J_{1}t+J_{0}\right) +p_{n}^{(\alpha ,\beta )}(1-2t)R_{n,0}\left( J_{2}t+J_{1}\right) \left. \right] . \end{aligned}$$

Hence, \(P_{n}\left( t\right) \) is a polynomial of degree n if and only if \(t=0\) and \(t=1\) are zeros of the following polynomial

$$\begin{aligned} Q\left( t\right)= & {} p_{n}^{(\alpha +2,\beta )}(1-2t)R_{n,2}t\left( J_{2}t^{2}+J_{1}t+J_{0}\right) \\&+\,p_{n}^{(\alpha +1,\beta )}(1-2t)R_{n,1}\left( J_{2}t^{2}+J_{1}t+J_{0}\right) \\&+p_{n}^{(\alpha ,\beta )}(1-2t)R_{n,0}\left( J_{2}t+J_{1}\right) \end{aligned}$$

and \(t=1\) is a zero of multiplicity two, i.e., \(Q\left( 0\right) =Q\left( 1\right) =Q^{\prime }\left( 1\right) =0\).

Taking into account that \(p_{n}^{(\alpha ,\beta )}(1)=\dfrac{\Gamma \left( n+\alpha +1\right) }{n!\Gamma \left( \alpha +1\right) }\) and \(p_{n}^{(\alpha ,\beta )}(-1)=(-1)^{n}\dfrac{\Gamma \left( n+\beta +1\right) }{n!\Gamma \left( \beta +1\right) }\), we have

$$\begin{aligned} Q\left( 0\right)= & {} p_{n}^{(\alpha +1,\beta )}(1)R_{n,1}J_{0}+p_{n}^{(\alpha ,\beta )}(1)R_{n,0}J_{1}\\= & {} \frac{\Gamma \left( n+\alpha +1\right) }{n!\Gamma \left( \alpha +1\right) }\left( \frac{n+\alpha +1}{\alpha +1} R_{n,1}J_{0}+R_{n,0}J_{1}\right) ={\mathbf {0}}, \\ Q\left( 1\right)= & {} \left( -1\right) ^{n}\frac{\Gamma \left( n+\beta +1\right) }{n!\Gamma \left( \beta +1\right) }(R_{n,2}+R_{n,1}+R_{n,0})\left( J_{2}+J_{1}+J_{0}\right) ={\mathbf {0}}. \end{aligned}$$

Now, taking the derivative of \(Q\left( t\right) \) with respect to t and using the identity

$$\begin{aligned} \left( p_{n}^{(\alpha ,\beta )}\right) ^{\prime }\left( -1\right) = \displaystyle \frac{\left( \beta +\alpha +n+1\right) \left( -1\right) ^{n-1}\Gamma \left( n+\beta +1\right) }{2\left( n-1\right) !\,\Gamma \left( \beta +2\right) }, \end{aligned}$$

we obtain

$$\begin{aligned} Q^{\prime }\left( 1\right)= & {} -2\left( \left( p_{n}^{(\alpha +2,\beta )}\right) ^{\prime }(-1)R_{n,2} + \left( p_{n}^{(\alpha +1,\beta )}\right) ^{\prime }(-1)R_{n,1}\right. \\&\left. + \left( p_{n}^{(\alpha ,\beta )}\right) ^{\prime }(-1)R_{n,0} \right) \left( J_{2}+J_{1}+J_{0}\right) \\&+p_{n}^{(\alpha +2,\beta )}(-1)R_{n,2}\left( 3J_{2}+2J_{1}+J_{0}\right) \\&+p_{n}^{(\alpha +1,\beta )}(-1)R_{n,1}\left( 2J_{2}+J_{1}\right) +p_{n}^{(\alpha ,\beta )}(-1)R_{n,0}J_{2}=0. \end{aligned}$$

This shows that \(Q\left( t\right) \) is divisible by \(t\left( t-1\right) ^{2}\); therefore, since \(\deg \left( Q(t)\right) =n+3\), \(P_{n}\left( t\right) \) is a polynomial of degree n.

Observe that the leading coefficient of \(P_{n}\left( t\right) \) is determined by the leading coefficient of \(n!p_{n}^{(\alpha +2,\beta )}(1-2t)R_{n,2}J_{2}t^{4}\). Considering (3.4), we have

$$\begin{aligned}&\frac{\left( -1\right) ^{n}\Gamma \left( 2n+\alpha +\beta +3\right) }{\Gamma \left( n+\alpha +\beta +3\right) }R_{n,2}J_{2}\\&\quad = \dfrac{\left( -1\right) ^{n}\left( \alpha +\beta +n+3\right) _{n}}{v }\begin{pmatrix} \dfrac{ \kappa _{v,-\beta }}{\kappa _{v,\beta }+2 }c_{n} &{} 0 \\ 0 &{} -\dfrac{ \kappa _{-v,-\beta }}{ \kappa _{-v,\beta }+2 }d_{n} \end{pmatrix}. \end{aligned}$$

The previous matrix coefficient is non-singular since \(|\alpha -\beta |<|v|<\alpha +\beta +2.\)

Moreover, if (3.3) holds true, then \(P_{n}\left( t\right) \) is a monic polynomial and equal to \( P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t \right) \). \(\square \)
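The classical Rodrigues formula used at the start of the proof can be confirmed symbolically; a minimal SymPy check (ours) for the sample values \(n=3\), \(\alpha =1\), \(\beta =2\), assuming SymPy's standard Jacobi normalization:

```python
import sympy as sp

t = sp.symbols('t')
n, a, b = 3, 1, 2   # sample degree and parameters
lhs = sp.diff(t**(n + a) * (1 - t)**(n + b), t, n)
rhs = sp.factorial(n) * t**a * (1 - t)**b * sp.jacobi(n, a, b, 1 - 2*t)
assert sp.expand(lhs - rhs) == 0   # Rodrigues formula for Jacobi polynomials
```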

Corollary 3.2

Consider the weight matrix \(W^{(\alpha ,\beta ,v)}(t)\) given in (2.4) and (2.5). Then, the monic orthogonal polynomials \(P_{n}^{\left( \alpha ,\beta ,v\right) }(t)\) satisfy the Rodrigues formula

$$\begin{aligned} {P_{n}^{(\alpha ,\beta ,v)}}(t)= (R_{n}^{(\alpha ,\beta ,v)}(t))^{(n)}\left( W^{(\alpha ,\beta ,v)}(t)\right) ^{-1}. \end{aligned}$$

We can see in the proof of Theorem 3.1 that the Rodrigues formula allows us to find an explicit expression for the polynomials in terms of the classical Jacobi polynomials.

Corollary 3.3

Consider the matrix-valued function \({\widetilde{W}}^{(\alpha ,\beta ,v)}(t)\) given in (2.5) and let \(R_{n,i}^{(\alpha ,\beta ,v)}\), \(i=0,1,2\), be as in Theorem 3.1. Define the coefficients \(c_n\) and \(d_n\) as in (3.3). Then, the sequence of monic orthogonal polynomials \(\left( P_{n}\right) _{n\ge 0}\) defined by (3.1) can be written as

$$\begin{aligned} P_{n}\left( t\right)= & {} n!\left( p_{n}^{(\alpha +2,\beta )}(1-2t)R_{n,2}^{\left( \alpha ,\beta ,v\right) }t^{2}+p_{n}^{(\alpha +1,\beta )}(1-2t)R_{n,1}^{\left( \alpha ,\beta ,v\right) }t\right. \nonumber \\&\left. +p_{n}^{(\alpha ,\beta )}(1-2t)R_{n,0}^{\left( \alpha ,\beta ,v\right) }\right) ({\widetilde{W}}^{(\alpha ,\beta ,v)} \left( t\right) )^{-1} . \end{aligned}$$
(3.5)

Moreover,

$$\begin{aligned} P_{n}\left( t\right)= & {} n!\left( p_{n}^{(\alpha ,\beta )}(1-2t){\mathscr {C}}_{n,2}^{(\alpha ,\beta ,v)}+p_{n+1}^{(\alpha ,\beta )}(1-2t){\mathscr {C}}_{n,1}^{(\alpha ,\beta ,v)}\right. \nonumber \\&\left. +p_{n+2}^{(\alpha ,\beta )}(1-2t){\mathscr {C}}_{n,0}^{(\alpha ,\beta ,v)}\right) ({\widetilde{W}}^{(\alpha ,\beta ,v)}\left( t\right) )^{-1} , \end{aligned}$$
(3.6)

with

$$\begin{aligned}&{\mathscr {C}}_{n,2}^{(\alpha ,\beta ,v)}=\dfrac{\left( \beta +n+1\right) \left( \alpha +n+1\right) }{\left( \alpha +\beta +2n+2\right) \left( \alpha +\beta +2n+3\right) } \begin{pmatrix} \dfrac{c_{n}\left( \kappa _{-v,\beta } +2n+4\right) }{\kappa _{v,\beta }+2n+2} &{} 0\\ 0 &{} \dfrac{d_{n}\left( \kappa _{v,\beta } +2n+4\right) }{\kappa _{-v,\beta } +2n+2} \end{pmatrix}, \ \\&{\mathscr {C}}_{n,1}^{(\alpha ,\beta ,v)}=\frac{n+1}{v} \begin{pmatrix} \dfrac{\left( \alpha -\beta \right) \left( \kappa _{-v,\beta } +2n+4\right) c_{n} }{\left( \alpha +\beta +2n+2\right) \left( \alpha +\beta +2n+4\right) } &{} - \dfrac{c_{n} \kappa _{v,-\beta } }{\kappa _{v,\beta }+2n+2} \\ \dfrac{d_{n}\kappa _{-v,-\beta } }{\kappa _{-v,\beta }+2n+2} &{} -\dfrac{ \left( \alpha -\beta \right) \left( \kappa _{v,\beta }+2n+4\right) d_{n}}{ \left( \alpha +\beta +2n+2\right) \left( \alpha +\beta +2n+4\right) } \end{pmatrix},\\&{\mathscr {C}}_{n,0}^{(\alpha ,\beta ,v)}=\frac{\left( n+1\right) \left( n+2\right) }{\left( \alpha +\beta +2n+4\right) \left( \alpha +\beta +2n+3\right) } \begin{pmatrix} c_{n} &{} 0 \\ 0 &{} d_{n} \end{pmatrix}. \end{aligned}$$

Proof

The expression in (3.5) follows from the proof above. To obtain (3.6), we use the following property of the classical Jacobi polynomials \(p_{n}^{(\alpha ,\beta )}(t)\) [43, Section 4.5]:

$$\begin{aligned} p_{n}^{(\alpha +1,\beta )}(1-2t)=\frac{\left( n+\alpha +1\right) p_{n}^{(\alpha ,\beta )}(1-2t)-\left( n+1\right) p_{n+1}^{(\alpha ,\beta )}(1-2t)}{(2n+\alpha +\beta +2)t} \end{aligned}$$

in (3.5). \(\square \)
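The contiguous relation above can likewise be checked symbolically for sample parameter values; a short SymPy sketch (ours, relying on SymPy's standard Jacobi normalization):

```python
import sympy as sp

t = sp.symbols('t')
n, a, b = 3, sp.Rational(1, 2), sp.Rational(3, 2)   # sample values
lhs = sp.jacobi(n, a + 1, b, 1 - 2*t)
rhs = ((n + a + 1)*sp.jacobi(n, a, b, 1 - 2*t)
       - (n + 1)*sp.jacobi(n + 1, a, b, 1 - 2*t)) / ((2*n + a + b + 2)*t)
assert sp.expand(sp.cancel(lhs - rhs)) == 0   # the relation holds identically in t
```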

4 Orthonormal Polynomials

In this section, we give an explicit expression for the norm of the matrix-valued polynomials \( \left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0} \). In addition, for the sequence of orthonormal polynomials, we show the three-term recurrence relation and the Christoffel–Darboux formula, introduced for a general sequence of matrix-valued orthogonal polynomials in [14] (see also [23]).

Proposition 4.1

The norm of the monic orthogonal polynomials \(P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) \), \(n\ge 0\), is determined by

$$\begin{aligned} \left\| P_{n}^{\left( \alpha ,\beta ,v\right) } \right\| ^{2}= {\frac{n!vB\left( \alpha +n+2,\beta +n+2\right) }{\left( \alpha +n+3+\beta \right) _{n}}} \begin{pmatrix} \dfrac{\left( \kappa _{v,\beta }+2\right) \left( \kappa _{-v,\beta } +2n+4\right) }{ \kappa _{v,-\beta } \left( \kappa _{v,\beta } +2n+2\right) } &{} 0 \\ 0 &{} -\dfrac{\left( \kappa _{-v,\beta }+2\right) \left( \kappa _{v,\beta }+2n+4\right) }{\kappa _{-v,-\beta } \left( \kappa _{-v,\beta }+2n+2\right) } \end{pmatrix}. \end{aligned}$$
(4.1)

Therefore, the sequence of polynomials

$$\begin{aligned} {\widetilde{P}}_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) =\left\| P_{n}^{\left( \alpha ,\beta ,v\right) }\right\| ^{-1}P_{n}^{\left( \alpha ,\beta ,v\right) }(t) \end{aligned}$$

is orthonormal with respect to W.

Proof

Let \(P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) =\sum _{k=0}^{n} {\mathcal {P}}_{n}^{k} t^{k}\); using the Rodrigues formula, we have

$$\begin{aligned} \Vert P_{n}^{\left( \alpha ,\beta ,v\right) }\Vert ^{2}= & {} \int _{0}^{1} P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) W\left( t\right) \left( P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) \right) ^{*} \mathrm{d}t\\ {}= & {} \sum _{k=0}^{n}\int _{0}^{1}\left( R_{n}\left( t\right) \right) ^{(n)}\left( {\mathcal {P}}_{n}^{k}\right) ^{*}t^{k}\mathrm{d}t. \end{aligned}$$

Integrating by parts \(n\) times, we obtain

$$\begin{aligned} \left\| P_{n}^{\left( \alpha ,\beta ,v\right) } \right\| ^{2}= & {} \left( -1\right) ^{n}\sum _{k=0}^{n}\int _{0}^{1}R_{n}\left( t\right) \frac{\mathrm{d}^{n}}{\mathrm{d}t^{n}} \left( {\mathcal {P}}_{n}^{k}\right) ^{*}t^{k}\mathrm{d}t\\= & {} \left( -1\right) ^{n}\int _{0}^{1}R_{n}\left( t\right) \frac{\mathrm{d}^{n}}{\mathrm{d}t^{n}}t^{n}\mathrm{d}t \\= & {} \left( -1\right) ^{n}n!\int _{0}^{1}R_{n}\left( t\right) \mathrm{d}t\\= & {} \left( -1\right) ^{n}n!\int _{0}^{1}t^{n+\alpha }\left( 1-t\right) ^{n+\beta }\left( R_{n,2}t^{2}+R_{n,1}t+R_{n,0}\right) \mathrm{d}t \\= & {} \left( -1\right) ^{n}n!\left[ B\left( \alpha +n+3,\beta +n+1\right) R_{n,2}\right. \\&+B\left( \alpha +n+2,\beta +n+1\right) R_{n,1}\\&\left. +B\left( \alpha +n+1,\beta +n+1\right) R_{n,0}\right] , \end{aligned}$$

where \(B\left( x,y\right) =\int _{0}^{1}t^{x-1}\left( 1-t\right) ^{y-1} \mathrm{d}t\) is the Beta function. Using the following property,

$$\begin{aligned} B\left( x+1,y\right) =\frac{x}{x+y}B\left( x,y\right) \end{aligned}$$

we obtain,

$$\begin{aligned}&\left\| P_{n}^{\left( \alpha ,\beta ,v\right) } \right\| ^{2} =\left( -1\right) ^{n}n!B\left( \alpha +n+1,\beta +n+1\right) \\&\quad \left( \frac{\left( \alpha +n+1\right) \left( \alpha +n+2\right) }{\left( \alpha +\beta +2n+2\right) \left( \alpha +\beta +2n+3\right) }R_{n,2}+\frac{ \alpha +n+1}{\alpha +\beta +2n+2}R_{n,1}+R_{n,0}\right) . \end{aligned}$$

Using the expressions in (3.2), after some straightforward computations, we complete the proof. \(\square \)
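The only analytic input of the proof is the Beta-function recursion \(B(x+1,y)=\frac{x}{x+y}B(x,y)\). As a quick numerical illustration (the sample values below are arbitrary):

```python
from math import gamma, isclose

def beta(x, y):
    # Euler Beta function via B(x, y) = Gamma(x) Gamma(y) / Gamma(x + y)
    return gamma(x) * gamma(y) / gamma(x + y)

for x, y in [(0.5, 2.5), (3.2, 1.7), (6.0, 4.5)]:   # arbitrary sample values
    assert isclose(beta(x + 1, y), x / (x + y) * beta(x, y), rel_tol=1e-12)
```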

The sequence of orthonormal polynomials satisfies the following properties.

Proposition 4.2

The orthonormal polynomials \(\left( {\widetilde{P}}_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\) satisfy the three-term recurrence relation

$$\begin{aligned} t{\widetilde{P}}_{n}^{\left( \alpha ,\beta ,v\right) }(t)={\widetilde{A}}^{\left( \alpha ,\beta ,v\right) }_{n+1}{\widetilde{P}} _{n+1}^{\left( \alpha ,\beta ,v\right) }(t)+{\widetilde{B}}^{\left( \alpha ,\beta ,v\right) }_{n}{\widetilde{P}} _{n}^{\left( \alpha ,\beta ,v\right) }(t)+\left( {\widetilde{A}}^{\left( \alpha ,\beta ,v\right) }_{n}\right) ^{*}{\widetilde{P}} _{n-1}^{\left( \alpha ,\beta ,v\right) }(t), \end{aligned}$$
(4.2)

with

$$\begin{aligned} {\widetilde{A}}^{\left( \alpha ,\beta ,v\right) }_{n+1}= & {} \left\| P_{n}^{\left( \alpha ,\beta ,v\right) } \right\| ^{-1}\left\| P_{n+1}^{\left( \alpha ,\beta ,v\right) }\right\| , \\ {\widetilde{B}}^{\left( \alpha ,\beta ,v\right) }_{n}= & {} \left\| P_{n}^{\left( \alpha ,\beta ,v\right) } \right\| ^{-1} B^{\left( \alpha ,\beta ,v\right) }_{n}\left\| P_{n}^{\left( \alpha ,\beta ,v\right) } \right\| , \end{aligned}$$

where \(B^{\left( \alpha ,\beta ,v\right) }_{n}\) is the coefficient in the three-term recurrence relation (2.13) for the monic orthogonal polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\). Clearly, \({\widetilde{B}}_{n}^{\left( \alpha ,\beta ,v\right) }\) is a symmetric matrix.

Proof

By replacing the identity \({\widetilde{P}}_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) =\left\| P_{n}^{\left( \alpha ,\beta ,v\right) } \right\| ^{-1}P_{n}^{\left( \alpha ,\beta ,v\right) }(t)\) in the three-term recurrence relation (2.13) and using identity (2.17), we obtain (4.2); by (2.18), one verifies that \(\left( {\widetilde{B}}^{\left( \alpha ,\beta ,v\right) }_{n}\right) ^{*}={\widetilde{B}}^{\left( \alpha ,\beta ,v\right) }_{n}\). \(\square \)

We also have the following Christoffel–Darboux formula for the sequence of orthonormal polynomials \(\left( {\widetilde{P}}_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\):

$$\begin{aligned} \sum _{k=0}^{n}\left( {\widetilde{P}}_{k}^{\left( \alpha ,\beta ,v\right) }\right) ^{*}\left( y\right) {\widetilde{P}}_{k}^{\left( \alpha ,\beta ,v\right) } \left( x\right)= & {} \frac{\left( {\widetilde{P}}_{n}^{\left( \alpha ,\beta ,v\right) }\right) ^{*}\left( y\right) \left( {\widetilde{A}}_{n+1}^{\left( \alpha ,\beta ,v\right) }\right) ^{*} {\widetilde{P}}_{n+1}^{\left( \alpha ,\beta ,v\right) }\left( x\right) }{x-y}\\&-\frac{\left( {\widetilde{P}}_{n+1}^{\left( \alpha ,\beta ,v\right) }\right) ^{*}\left( y\right) {\widetilde{A}}^{\left( \alpha ,\beta ,v\right) }_{n+1} {\widetilde{P}}_{n}^{\left( \alpha ,\beta ,v\right) }\left( x\right) }{x-y}. \end{aligned}$$

Hence, the sequence of monic polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\) satisfies

$$\begin{aligned}&\sum _{k=0}^{n}\left( P_{k}^{\left( \alpha ,\beta ,v\right) }\right) ^{*}\left( y\right) \left\| P_{k}^{\left( \alpha ,\beta ,v\right) } \right\| ^{-2}P_{k}^{\left( \alpha ,\beta ,v\right) }\left( x\right) \\&\quad =\frac{\left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) ^{*}\left( y\right) \left\| P_{n}^{\left( \alpha ,\beta ,v\right) } \right\| ^{-2} P_{n+1}^{\left( \alpha ,\beta ,v\right) }\left( x\right) }{x-y}\\&\qquad -\frac{\left( P_{n+1}^{\left( \alpha ,\beta ,v\right) }\right) ^{*}\left( y\right) \left\| P_{n}^{\left( \alpha ,\beta ,v\right) } \right\| ^{-2} P _{n}^{\left( \alpha ,\beta ,v\right) }\left( x\right) }{x-y}, \end{aligned}$$

where the explicit expression of \(\left\| P_{n}^{\left( \alpha ,\beta ,v\right) } \right\| ^{-2}\) follows from (4.1).
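In the scalar case, the same Christoffel–Darboux identity holds with the coefficients \(a_n\) of the orthonormal three-term recurrence in place of \({\widetilde{A}}_{n}\). The following Python sketch checks it numerically for the orthonormal Legendre polynomials (a scalar analogue chosen purely for illustration, not the matrix family of this paper):

```python
from math import sqrt, isclose

def orthonormal_legendre(n_max, x):
    # values p_0(x), ..., p_{n_max}(x) of the orthonormal Legendre polynomials
    # on [-1, 1], generated by the recurrence x p_n = a_{n+1} p_{n+1} + a_n p_{n-1}
    # with a_n = n / sqrt(4 n^2 - 1)  (b_n = 0 by symmetry of the weight)
    p = [1.0 / sqrt(2.0)]
    if n_max >= 1:
        p.append(x * p[0] * sqrt(3.0))
    for n in range(1, n_max):
        a_n  = n / sqrt(4.0 * n * n - 1.0)
        a_n1 = (n + 1) / sqrt(4.0 * (n + 1)**2 - 1.0)
        p.append((x * p[n] - a_n * p[n - 1]) / a_n1)
    return p

n, x, y = 6, 0.3, -0.45                      # arbitrary degree and sample points
px = orthonormal_legendre(n + 1, x)
py = orthonormal_legendre(n + 1, y)
kernel = sum(px[k] * py[k] for k in range(n + 1))
a_n1 = (n + 1) / sqrt(4.0 * (n + 1)**2 - 1.0)
cd = a_n1 * (px[n + 1] * py[n] - px[n] * py[n + 1]) / (x - y)
assert isclose(kernel, cd, rel_tol=1e-9)
```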

5 The Derivatives of the Orthogonal Matrix-Valued Polynomials

In this section, we prove that polynomials in the sequence of derivatives of the orthogonal matrix polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n \ge 0}\) are also orthogonal by obtaining a Pearson equation for the weight matrix \(W^{\left( \alpha ,\beta ,v\right) }(t).\)

Let \(\displaystyle \frac{\mathrm{d}^{k}}{\mathrm{d}t^{k}}P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) \) be the derivative of order k of the monic polynomial \(P_{n}^{\left( \alpha ,\beta ,v\right) }(t) \), for \(n\ge k\). Then,

$$\begin{aligned} P_{ n}^{\left( \alpha ,\beta ,v,k \right) }\left( t\right) =\displaystyle \frac{\left( n-k\right) !}{n!}\frac{\mathrm{d}^{k}}{\mathrm{d}t^{k}}P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) \end{aligned}$$
(5.1)

is a monic polynomial of degree \(n-k\) for each \( n \ge k.\)
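The normalization in (5.1) is exactly what is needed to keep the derivatives monic; a scalar illustration with an arbitrary monic polynomial:

```python
from math import factorial

def derive(coeffs):
    # coeffs[i] is the coefficient of t^i; returns the derivative's coefficients
    return [i * c for i, c in enumerate(coeffs)][1:]

n, k = 7, 3
coeffs = [5.0, -2.0, 1.0, 4.0, 0.0, 3.0, -1.0, 1.0]   # arbitrary monic, degree 7
d = coeffs
for _ in range(k):
    d = derive(d)
scaled = [factorial(n - k) * c / factorial(n) for c in d]  # (n-k)!/n! * d^k P
assert len(scaled) == n - k + 1 and scaled[-1] == 1.0      # monic of degree n - k
```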

The polynomial \(P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) \) is an eigenfunction of the operator \(D^{\left( \alpha ,\beta ,v\right) }\) given above in (2.7)–(2.9).

Differentiating \(k\) times, we see that \(P_{n}^{\left( \alpha ,\beta ,v,k\right) }\left( t\right) \) is an eigenfunction of the hypergeometric differential operator

$$\begin{aligned} D^{\left( k\right) }=D^{\left( \alpha ,\beta ,v,k\right) }=\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}t(1-t) +\frac{\mathrm{d}}{\mathrm{d}t}((C^{(k)})^{*}-tU^{(k)})-V , \end{aligned}$$
(5.2)

with

$$\begin{aligned} C^{(k)} =C+kI ,\qquad U^{(k)}=U+2kI=\left( \alpha +\beta +4+2k\right) \mathrm {\ I}, \end{aligned}$$

where C, U and V are the matrix entries of the operator \(D^{\left( \alpha ,\beta ,v\right) }\) given in (2.9). One has that

$$\begin{aligned} P_{n}^{\left( \alpha ,\beta ,v,k\right) }D^{\left( \alpha ,\beta ,v,k\right) }=\Lambda _{n}^{(k)}P_{n}^{\left( \alpha ,\beta ,v,k\right) },\quad n\ge k, \end{aligned}$$

where \(\Lambda _{n}^{(k)}=\Lambda _{n}+kU+k(k-1)I\), with \(\Lambda _{n}\) given in (2.10). One has, in particular, the standard expression for the eigenvalue shown in [4, Proposition 3.3], \(\Lambda _{n}^{(k)}=-(n-k)(n-k-1)I-(n-k)U^{(k)}-V\). More explicitly,

$$\begin{aligned} \Lambda _{n}^{(k)}= \begin{pmatrix} \lambda _{n}^{(k)} &{} 0 \\ 0 &{} \mu _{n}^{(k)} \end{pmatrix} ,\qquad \begin{array}{c} \lambda _{n}^{(k)}=-(n-k)\left( \alpha +\beta +3+n+k\right) -v, \\ \mu _{n}^{(k)}=-(n-k)\left( \alpha +\beta +3+n+k\right) . \end{array} \end{aligned}$$
(5.3)
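The relation \(\Lambda _{n}^{(k)}=\Lambda _{n}+kU+k(k-1)I\) and the closed form (5.3) can be cross-checked entrywise. The following sketch does so in exact arithmetic for hypothetical parameter values, taking \(\lambda _{n}=-n(\alpha +\beta +3+n)-v\) (the \(k=0\) case of (5.3)) as the first diagonal entry of \(\Lambda _{n}\):

```python
from fractions import Fraction as F

def lam(n, a, b, v):
    # lambda_n = -n(alpha + beta + 3 + n) - v, the k = 0 case of (5.3)
    return -n * (a + b + 3 + n) - v

def lam_k(n, k, a, b, v):
    # lambda_n^{(k)} as given in (5.3)
    return -(n - k) * (a + b + 3 + n + k) - v

a, b, v = F(1, 3), F(2, 5), F(1, 2)    # hypothetical parameter values
U = a + b + 4                          # scalar entry of U = (alpha + beta + 4) I
for n in range(8):
    for k in range(n + 1):
        # entrywise version of Lambda_n^{(k)} = Lambda_n + k U + k (k - 1) I
        assert lam_k(n, k, a, b, v) == lam(n, a, b, v) + k*U + k*(k - 1)
# the second diagonal entry mu_n^{(k)} is the same identity with v = 0
```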

Remark 5.1

One notices that \(D^{\left( \alpha ,\beta ,v,k\right) }=D^{\left( \alpha +k,\beta +k,v\right) }.\) Thus, the derivatives are still common eigenfunctions of a hypergeometric operator, with diagonal matrix eigenvalues \(\Lambda ^{(k)}_n\) whose entries are pairwise distinct.

Proposition 5.2

As in (2.11), we have the following explicit expression for the sequence of polynomials \( \left( P_{n}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge k}\) in terms of hypergeometric functions

$$\begin{aligned} \left( P_{n}^{\left( \alpha ,\beta ,v,k\right) }\left( t\right) \right) ^{*}&=\,_{2}H_{1}\left( { C^{(k)},U^{(k)},V+\lambda _{n}^{(k)}I} ;t\right) \left( n-k\right) !\nonumber \\&\quad \left[ C^{(k)},U^{(k)},V +\lambda _{n}^{(k)}I\right] _{n-k}^{-1} \begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \end{pmatrix}\nonumber \\&\quad + \,_{2}H_{1}\left( { C^{(k)},U^{(k)},V+\mu _{n}^{(k)}I} ;t\right) \left( n-k\right) !\nonumber \\&\quad \left[ C^{(k)},U^{(k)},V+\mu _{n}^{(k)}I\right] _{n-k}^{-1} \begin{pmatrix} 0 &{} 0 \\ 0 &{} 1 \end{pmatrix} , \end{aligned}$$
(5.4)

where \(C^{(k)}\), \(U^{(k)}\) and V are the entries of the differential operator in (5.2) and \(\lambda _{n}^{(k)}\) and \(\mu _n^{(k)}\) are the diagonal entries of the matrix eigenvalue \(\Lambda _n^{(k)}\) given in (5.3).

We include the proof for completeness.

Proof

Indeed, the polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge k}\) are common eigenfunctions of the matrix hypergeometric-type operator (5.2) with diagonal eigenvalue \(\Lambda _{n}^{(k)}\).

The fact that the eigenvalue is diagonal implies that the matrix equation can be written as two vectorial hypergeometric equations as in [44, Theorem 5], and the solutions of these equations are the columns of \(\left( P_{n}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge k}\). Since the eigenvalues of the matrix \(C^{(k)}\), namely \(3+\alpha +k\) and \(1+\alpha +k\), are never nonpositive integers for \(k\ge 1\), these solutions are hypergeometric vector functions.

Moreover, these vector functions are polynomials of degree \(n-k\): the factor \(\left( (n-k)(n-k-1)I+(n-k)U^{(k)}+V+\lambda _{n}^{(k)}I\right) =-\Lambda _n^{(k)}+\lambda _{n}^{(k)}I\) appearing in the expression of \(\left[ C^{(k)},U^{(k)},V+\lambda _{n}^{(k)}I\right] _{n-k+1}\) (see (2.12)) makes its first column equal to zero, and analogously for the second column of \(\left[ C^{(k)},U^{(k)},V+\mu _{n}^{(k)}I\right] _{n-k+1}\).

The matrices \(\left[ C^{(k)},U^{(k)},V+\mu _{n}^{(k)}I\right] _{n-k}\) and \(\left[ C^{(k)},U^{(k)},V+\lambda _{n}^{(k)}I\right] _{n-k}\) are non-singular, since \(\lambda _{q}^{(k)}\ne \mu _{\ell }^{(k)}\), \(\lambda _{q}^{(k)}\ne \lambda _{\ell }^{(k)}\) and \(\mu _{q}^{(k)}\ne \mu _{\ell }^{(k)}\) for all \(q\ne \ell \). \(\square \)

Proposition 5.3

Let \(\alpha ,\beta >-(k+1)\) and \(|\alpha -\beta |<|v|<\alpha +\beta +2\left( k+1\right) \). We write

$$\begin{aligned} W^{(k)}(t)= & {} W^{\left( \alpha ,\beta ,v,k\right) }(t)=t^{\alpha +k}\left( 1-t\right) ^{\beta +k} {\widetilde{W}}^{(\alpha , \beta , v, k)}\left( t\right) , \text {where}\nonumber \\ {\widetilde{W}}^{\left( \alpha ,\beta ,v,k\right) }\left( t\right)= & {} W_{2}^{(k)}t^{2}+W_{1}^{(k)}t+W_{0}^{(k)}, \end{aligned}$$
(5.5)

with

$$\begin{aligned} W_{2}^{(k)}= & {} v\begin{pmatrix} \dfrac{\kappa _{v,\beta }+2\left( k+1\right) }{\kappa _{v,-\beta } } &{} 0 \\ 0 &{} -\dfrac{ \kappa _{-v,\beta }+2( k+1)}{\kappa _{-v,-\beta } } \end{pmatrix} ,\quad W_{0}^{(k)}=(\alpha +k+1) \begin{pmatrix} 1 &{} -1 \\ -1 &{} 1 \end{pmatrix}, \\ W_{1}^{(k)}= & {} \begin{pmatrix} - \kappa _{v,\beta } &{} \alpha +\beta \\ \alpha +\beta &{} - \kappa _{-v,\beta } \end{pmatrix}+2\left( k+1\right) \begin{pmatrix} 1 &{} 1 \\ 1&{} 1 \end{pmatrix}. \end{aligned}$$

Then, \(W^{(k)}\) is an irreducible weight matrix and the hypergeometric differential operator \(D^{\left( k\right) }\) in (5.2) is symmetric with respect to the weight matrix \(W^{(k)}\). Moreover, it holds that \(W^{\left( k\right) }(t)=W^{\left( \alpha +k,\beta +k,v\right) }(t)\).

Proof

Taking into account Remark 5.1 and the fact that \(W^{\left( k\right) }(t)=W^{\left( \alpha +k,\beta +k,v\right) }(t)\), from Proposition 4.1 in [4], one has that \(D^{\left( \alpha +k,\beta +k,v\right) }\) is symmetric with respect to \(W^{\left( \alpha +k,\beta +k,v\right) }\) and \(W^{\left( \alpha +k,\beta +k,v\right) }\) is an irreducible weight matrix if and only if \(\alpha +k\) and \(\beta +k\) satisfy \( \alpha +k>-1\), \(\beta +k>-1\) and \(|\left( \alpha +k\right) -\left( \beta +k\right) |<|v|<\left( \alpha +k\right) +\left( \beta +k\right) +2.\) \(\square \)

We will use the following Pearson equation to prove that the sequence of polynomials \( \left( P_{n}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge k} \) is orthogonal with respect to \( W^{(k)}\).

Theorem 5.4

The weight matrix \(W^{(k)}\) satisfies the following Pearson equation:

$$\begin{aligned} \left( W^{(k)}\left( t\right) \Phi ^{(k)}\left( t\right) \right) ^{\prime }= & {} W^{(k)}\left( t\right) \Psi ^{(k)}\left( t\right) ,\,k\in {\mathbb {N}}, \text { with }\nonumber \\ \Phi ^{(k)}\left( t\right)= & {} {\mathscr {A}}_{2}^{k}t^{2}+{\mathscr {A}}_{1}^{k}t+{\mathscr {A}}_{0}^{k}\text { and }\Psi ^{(k)}\left( t\right) ={\mathscr {B}}_{1}^{k}t+{\mathscr {B}}_{0}^{k}, \end{aligned}$$
(5.6)

where

$$\begin{aligned} {\mathscr {A}}_{2}^{k}&= \begin{pmatrix} -\dfrac{\kappa _{v,\beta } +2(k+2)}{\kappa _{v,\beta } +2(k+1)} &{} 0 \\ 0 &{} -\dfrac{\kappa _{-v,\beta } +2(k+2)}{\kappa _{-v,\beta }+2(k+1)} \end{pmatrix},\end{aligned}$$
(5.7)
$$\begin{aligned} {\mathscr {A}}_{1}^{k}&= \frac{2}{(\kappa _{-v,\beta }+2(k+1))(\kappa _{v,\beta }+2(k+1))}\begin{pmatrix} 0 &{} \kappa _{v,-\beta }\\ \kappa _{-v,-\beta } &{} 0 \end{pmatrix}-{\mathscr {A}}_{2}^{k}, \end{aligned}$$
(5.8)
$$\begin{aligned} {\mathscr {A}}_{0}^{k}&=\frac{\kappa _{v,-\beta }\kappa _{-v,-\beta }}{v(\kappa _{-v,\beta }+2(k+1))(\kappa _{v,\beta }+2(k+1))} \begin{pmatrix} -1 &{} 1 \\ -1 &{} 1 \end{pmatrix},\end{aligned}$$
(5.9)
$$\begin{aligned} {\mathscr {B}}_{1}^{k}&=(\alpha +\beta +4+2k){\mathscr {A}}_{2}^{k} , \end{aligned}$$
(5.10)
$$\begin{aligned} {\mathscr {B}}_{0}^{k}&= \left( -(\alpha +k+1)I-\dfrac{1}{v}\begin{pmatrix} -\kappa _{-v,-\beta }&{}0\\ 0&{}\kappa _{v,-\beta } \end{pmatrix} \right) {\mathscr {A}}_{2}^{k}\nonumber \\&\quad +\dfrac{1}{2v}\left( \dfrac{\alpha +\beta +2k+4}{v} {\mathscr {A}}_{1}^{k}+{\mathscr {B}}_{1}^{k}\right) \begin{pmatrix} -\kappa _{-v,\beta }-2(k+1)&{}0\\ 0&{}\kappa _{v,\beta }+2(k+1) \end{pmatrix}. \end{aligned}$$
(5.11)

Proof

By replacing the expressions of \(\Phi ^{(k)}\left( t\right) \) and \(\Psi ^{(k)}\left( t\right) \) in (5.6) and differentiating, we obtain

$$\begin{aligned} \left( W^{(k)}\left( t\right) \right) ^{\prime }\left( {\mathscr {A}}_{2}^{k}t^{2}+{\mathscr {A}}_{1}^{k}t+{\mathscr {A}}_{0}^{k}\right) -W^{(k)}\left( t\right) \left( \left( {\mathscr {B}}_{1}^{k}-2{\mathscr {A}}_{2}^{k}\right) t+{\mathscr {B}}_{0}^{k}-{\mathscr {A}}_{1}^{k}\right) =0. \end{aligned}$$
(5.12)

The derivative of \(W^{(k)}\left( t\right) \) is

$$\begin{aligned} \left( W^{(k)}\left( t\right) \right) ^{\prime }= & {} t^{\alpha +k-1}\left( 1-t\right) ^{\beta +k-1}\left[ -\left( \alpha +\beta +2k+2\right) W_{2}^{(k)}t^{3}\right. \\&+\left[ \left( \alpha +k+2\right) W_{2}^{(k)}-\left( \alpha +\beta +2k+1\right) W_{1}^{(k)}\right] t^{2}\\&+ \left[ \left( \alpha +k+1\right) W_{1}^{(k)}-\left( \alpha +\beta +2k\right) W_{0}^{(k)}\right] t \left. +\left( \alpha +k\right) W_{0}^{(k)}\right] . \end{aligned}$$

Hence, the left-hand side of (5.12) is the product of \(t^{\alpha +k-1}\left( 1-t\right) ^{\beta +k-1}\) and a matrix polynomial of degree five. Therefore, equating to zero the coefficients of this polynomial, taking into account (5.10) and the equality \(W_{0}^{(k)} {\mathscr {A}}_{0}^{k}=0\), it only remains to verify the identities below, which follow by straightforward computations.

$$\begin{aligned}&\left( \alpha +k+4\right) W_{2}^{(k)}{\mathscr {A}}_{2}^{k}-\left( \alpha +\beta +2k+3\right) \left( W_{2}^{(k)} {\mathscr {A}}_{1}^{k} +W_{1}^{(k)}{\mathscr {A}}_{2}^{k}\right) \\&\quad + W_{2}^{(k)}\left( {\mathscr {B}}_{0}^{k}-{\mathscr {B}}_{1}^{k}\right) +W_{1}^{(k)}{\mathscr {B}}_{1}^{k} =0,\\&\left( \alpha +k+3\right) \left( W_{2}^{(k)} {\mathscr {A}}_{1}^{k}+ W_{1}^{(k)}{\mathscr {A}}_{2}^{k}\right) -\left( \alpha +\beta +2k+2\right) \\&\qquad \left( W_{2}^{(k)}{\mathscr {A}}_{0}^{k}+W_{1}^{(k)}{\mathscr {A}}_{1}^{k}\right) -W_{2}^{(k)}{\mathscr {B}}_{0}^{k}\nonumber \\&\quad +W_{1}^{(k)}\left( {\mathscr {B}}_{0}^{k}-{\mathscr {B}}_{1}^{k}\right) +2W_{0}^{(k)} {\mathscr {A}}_{2}^{k} =0,\\&\left( \alpha +k+2\right) \left( W_{2}^{(k)}{\mathscr {A}}_{0}^{k}+W_{1}^{(k)} {\mathscr {A}}_{1}^{k}+W_{0}^{(k)} {\mathscr {A}}_{2}^{k}\right) \\&\quad -\left( \alpha +\beta +2k+1\right) \left( W_{1}^{(k)} {\mathscr {A}}_{0}^{k}+W_{0}^{(k)}{\mathscr {A}}_{1}^{k}\right) \\&\quad - W_{1}^{(k)}{\mathscr {B}}_{0}^{k}+ W_{0}^{(k)}\left( {\mathscr {B}}_{0}^{k}-{\mathscr {B}}_{1}^{k}\right) =0,\\&\left( \alpha +k+1\right) \left( W_{1}^{(k)}{\mathscr {A}}_{0}^{k}+W_{0}^{(k)} {\mathscr {A}}_{1}^{k}\right) -W_{0}^{(k)}{\mathscr {B}}_{0}^{k} =0. \end{aligned}$$

\(\square \)
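The matrix Pearson equation above mirrors the scalar Pearson equation \(\left( w(t)\varphi (t)\right) ^{\prime }=w(t)\psi (t)\) for the Jacobi weight \(w(t)=t^{a}(1-t)^{b}\), with \(\varphi (t)=t(1-t)\) and \(\psi (t)=(a+1)-(a+b+2)t\). For integer \(a,b\) everything is polynomial, and the identity can be checked coefficientwise in exact arithmetic (an illustrative scalar analogue, not the matrix computation of the proof):

```python
from fractions import Fraction as F

def poly_mul(p, q):
    # product of polynomials given as coefficient lists (index = power of t)
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def poly_diff(p):
    # derivative of a coefficient list
    return [i * c for i, c in enumerate(p)][1:]

a, b = 2, 3                                  # hypothetical integer parameters
one_minus_t = [F(1), F(-1)]
w = [F(0)] * a + [F(1)]                      # t^a
for _ in range(b):
    w = poly_mul(w, one_minus_t)             # w = t^a (1 - t)^b
phi = poly_mul([F(0), F(1)], one_minus_t)    # phi = t (1 - t)
psi = [F(a + 1), F(-(a + b + 2))]            # psi = (a + 1) - (a + b + 2) t

lhs = poly_diff(poly_mul(w, phi))            # (w phi)'
rhs = poly_mul(w, psi)                       # w psi
assert lhs == rhs
```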

Remark 5.5

Let us consider the matrix-valued functions \(W^{\left( \alpha ,\beta ,v,k\right) }\left( t\right) =W^{(k)}(t)\), \(\Phi ^{(k)}(t)\) and \(\Psi ^{(k)}(t)\), \(k\in {\mathbb {N}}\), defined in (5.5) and Theorem 5.4, respectively. Then, by straightforward computations, one can verify the following identities:

$$\begin{aligned} W^{\left( \alpha ,\beta ,v,k+1\right) }\left( t\right)= & {} W^{\left( \alpha ,\beta ,v,k\right) }\left( t\right) \Phi ^{(k)}\left( t\right) , \end{aligned}$$
(5.13)
$$\begin{aligned} \left( W^{\left( \alpha ,\beta ,v,k+1\right) }\left( t\right) \right) ^{\prime }= & {} W^{\left( \alpha ,\beta ,v,k\right) }\left( t\right) \Psi ^{(k)}\left( t\right) . \end{aligned}$$
(5.14)

Taking into account that \(\deg \left( \Phi ^{(k)}(t)\right) =2\) and \(\deg \left( \Psi ^{(k)}(t)\right) =1\), we obtain from [5, Corollary 3.10] the following:

Corollary 5.6

The sequence of polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge k}\) is orthogonal with respect to the weight matrix \(W^{(k)} =W^{\left( \alpha ,\beta ,v,k-1\right) }\left( t\right) \Phi ^{(k-1)}\left( t\right) .\)

The following results are obtained in the same way as Theorem 3.1 and Corollary 3.3.

Proposition 5.7

Let \(W^{(k)}(t)\) be defined as in (5.5). A Rodrigues formula for the sequence of polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge k}\) is

$$\begin{aligned} P_{n}^{\left( \alpha ,\beta ,v,k\right) }(t)= & {} \left( R_{n}^{\left( \alpha ,\beta ,v,k\right) }\left( t\right) \right) ^{\left( n-k\right) }\left( W^{(k)}\left( t\right) \right) ^{-1} ,\ \text {where}\\ R_{n}^{\left( \alpha ,\beta ,v,k\right) }\left( t\right)= & {} R_{n-k}^{\left( \alpha +k,\beta +k,v\right) }\left( t\right) . \end{aligned}$$

Corollary 5.8

Let the matrix-valued function \(W^{(k)}(t)\) and the matrices \(R^{\left( \alpha ,\beta ,v\right) }_{n-k,2},R^{\left( \alpha ,\beta ,v\right) }_{n-k,1}\) and \(R^{\left( \alpha ,\beta ,v\right) }_{n-k,0}\) be defined as in (5.5) and (3.2), respectively. From the Rodrigues formula, we get explicit expressions for the sequence of polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge k}\) in terms of the classical Jacobi polynomials \(p_{n}^{(\alpha ,\beta )}(t)\),

$$\begin{aligned} P_{n}^{\left( \alpha ,\beta ,v,k\right) }\left( t\right)= & {} (n-k)!\left( p_{n-k}^{(\alpha +2+k,\beta +k)}(1-2t)R_{n-k,2}^{\left( \alpha +k,\beta +k,v\right) }t^{2}\right. \\&+p_{n-k}^{(\alpha +1+k,\beta +k)}(1-2t)R_{n-k,1}^{\left( \alpha +k,\beta +k,v\right) }t\\&\left. +p_{n-k}^{(\alpha +k,\beta +k)}(1-2t)R_{n-k,0}^{\left( \alpha +k,\beta +k,v\right) }\right) \left( {\widetilde{W}}^{(k)}\right) ^{-1} \end{aligned}$$

and

$$\begin{aligned} P_{n}^{\left( \alpha ,\beta ,v,k\right) }\left( t\right)= & {} (n-k)!\left( p_{n-k}^{(\alpha +k,\beta +k)}(1-2t){\mathscr {C}}_{n-k,2}^{(\alpha +k,\beta +k,v)}\right. \\&+p_{n-k+1}^{(\alpha +k,\beta +k)}(1-2t){\mathscr {C}}_{n-k,1}^{(\alpha +k,\beta +k,v)}\\&\left. +p_{n-k+2}^{(\alpha +k,\beta +k)}(1-2t){\mathscr {C}}_{n-k,0}^{(\alpha +k,\beta +k,v)}\right) \left( \widetilde{W }^{(k)}\right) ^{-1}, \end{aligned}$$

with \({\mathscr {C}}_{n-k,i}^{\left( \alpha +k,\beta +k,v\right) }\), \(i=0,1,2\), given by (3.6).

Proposition 5.9

The orthogonal monic polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge k}\) satisfy the three-term recurrence relation

$$\begin{aligned} tP_{n}^{\left( \alpha ,\beta ,v,k\right) }(t)=P_{n+1}^{\left( \alpha ,\beta ,v,k\right) }(t)+B_{n}^{\left( \alpha ,\beta ,v,k\right) }P_{n}^{\left( \alpha ,\beta ,v,k\right) }(t) +A_{n}^{\left( \alpha ,\beta ,v,k\right) }P_{n-1}^{\left( \alpha ,\beta ,v,k\right) }(t) \end{aligned}$$

with

$$\begin{aligned} B_{n}^{\left( \alpha ,\beta ,v,k\right) }=B_{n-k}^{\left( \alpha +k,\beta +k,v\right) },\quad A_{n}^{\left( \alpha ,\beta ,v,k\right) }=A_{n-k}^{\left( \alpha +k,\beta +k,v\right) },\ n\ge k. \end{aligned}$$

The explicit expressions of \(B_{n}^{\left( \alpha ,\beta ,v\right) }\) and \(A_{n}^{\left( \alpha ,\beta ,v\right) }\) are given in (2.14)-(2.16).

Considering that \(W^{\left( k\right) }(t)=W^{\left( \alpha +k,\beta +k,v\right) }(t)\) (see Proposition 5.3), the previous recurrence follows directly from (2.3). Nevertheless, we include the following proof for completeness.

Proof

If we write \(P_{n}^{\left( \alpha ,\beta ,v,k\right) } (t)=\sum _{s=0} ^{n-k}{\mathcal {P}}_{n-k}^{s}t^{s}\), from (5.4), we have the following explicit expressions,

$$\begin{aligned} {\mathcal {P}}_{n-k}^{n-k-1}= & {} \dfrac{\left( n-k\right) }{v} \begin{pmatrix} -\dfrac{(\alpha +n)v-\kappa _{-v,-\beta }}{(\alpha +\beta +2n+2)} &{} \dfrac{\kappa _{-v,-\beta }}{(\kappa _{v,\beta } +2n+2)} \\ -\dfrac{\kappa _{v,-\beta }}{(\kappa _{-v,\beta } +2n+2)} &{} -\dfrac{(\alpha +n)v+\kappa _{-v,-\beta }}{(\alpha +\beta +2n+2)} \end{pmatrix}, \end{aligned}$$
(5.15)
$$\begin{aligned} {\mathcal {P}}_{n-k} ^{n-k-2}= & {} \dfrac{\left( n-k\right) \left( n-k-1\right) (\alpha +n+1)}{(\alpha +\beta +2n+2)} \left[ \dfrac{\alpha +n}{2(\alpha +\beta +2n+1)}\right. \nonumber \\&\begin{pmatrix} \dfrac{\kappa _{v,\beta }+2n}{\kappa _{v,\beta }+2n+2}&{}0 \\ 0 &{} \dfrac{\kappa _{-v,\beta }+2n}{\kappa _{-v,\beta }+2n+2} \end{pmatrix} \nonumber \\&+\dfrac{n+\beta +1}{\alpha +\beta +2n+1}\begin{pmatrix} \dfrac{1}{\kappa _{v,\beta }+2n+2}&{}0 \\ 0 &{} \dfrac{1}{\kappa _{-v,\beta }+2n+2} \end{pmatrix} \end{aligned}$$
(5.16)
$$\begin{aligned}&+\left. \dfrac{1}{v}\begin{pmatrix} -\dfrac{\alpha -\beta }{(\kappa _{v,\beta }+2n+2)}&{}-\dfrac{\kappa _{-v,-\beta }}{ \kappa _{v,\beta }+2n+2}\\ \dfrac{\kappa _{v,-\beta }}{ \kappa _{-v,\beta }+2n+2}&{} \dfrac{\alpha -\beta }{(\kappa _{-v,\beta }+2n+2)} \end{pmatrix}\right] . \end{aligned}$$
(5.17)

Comparing the coefficients of orders \(n-k\) and \(n-k-1\) in the three-term recurrence relation, we have,

$$\begin{aligned} B_{n}^{\left( \alpha ,\beta ,v,k\right) }= & {} {\mathcal {P}}_{n-k} ^{n-k-1}-{\mathcal {P}}_{n+1-k} ^{n-k-1}, \\ A_{n}^{\left( \alpha ,\beta ,v,k\right) }= & {} {\mathcal {P}}_{n-k} ^{n-k-2}-{\mathcal {P}}_{n+1-k}^{n-k-1}-B_{n}^{\left( \alpha ,\beta ,v,k\right) }{\mathcal {P}}_{n-k} ^{n-k-1}, \qquad n \in {\mathbb {N}}, \end{aligned}$$

respectively. Comparing with the expressions of \(B_{n-k}^{\left( \alpha +k ,\beta +k ,v\right) }\) and \(A_{n-k}^{\left( \alpha +k ,\beta +k ,v\right) }\) obtained by the corresponding substitutions in (2.14)-(2.16), the proposition follows. \(\square \)

6 Shift Operators

In this section, we use the Pearson equation (5.6) to give explicit lowering and raising operators for the monic \(n\)-degree polynomials \(P_{n+k}^{\left( \alpha ,\beta ,v,k\right) }\left( t\right) \), \(n\ge 0\), defined in (5.1). Moreover, from the existence of the shift operators, we deduce a Rodrigues formula for the sequence of derivatives \(\left( P_{n+k}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge 0}\), and we find a matrix-valued differential operator for which these matrix-valued polynomials are eigenfunctions. In what follows, we will consider the matrix-valued functions \(W^{(k)}\left( t\right) ,\Phi ^{(k)}\left( t\right) \) and \(\Psi ^{(k)}\left( t\right) \), \(k\in {\mathbb {N}}\), as defined in Theorem 5.4.

For any pair of matrix-valued functions P and Q, we denote

$$\begin{aligned} \left\langle P,Q\right\rangle _{k}=\int _{0}^{1}P\left( t\right) W^{(k)}\left( t\right) Q^{*}\left( t\right) \mathrm{d}t. \end{aligned}$$

Proposition 6.1

Let \(\eta ^{(k)}\) be the first-order matrix-valued right differential operator

$$\begin{aligned} \eta ^{(k)}= \dfrac{\mathrm{d}}{\mathrm{d}t} (\Phi ^{(k)}\left( t\right) )^{*} + (\Psi ^{(k)}\left( t\right) )^{*}. \end{aligned}$$
(6.1)

Then, \(\dfrac{\mathrm{d}}{\mathrm{d}t}:L^{2}\left( W^{(k)}\right) \rightarrow L^{2}\left( W^{(k+1)}\right) \) and \(\eta ^{(k)}:L^{2}\left( W^{(k+1)}\right) \rightarrow L^{2}\left( W^{(k)}\right) \) satisfy

$$\begin{aligned} \left\langle \frac{\mathrm{d}P}{\mathrm{d}t},Q\right\rangle _{k+1}=-\left\langle P, Q\eta ^{(k)}\right\rangle _{k}. \end{aligned}$$

Proof

From \(\left\langle \dfrac{\mathrm{d}P}{\mathrm{d}t},Q\right\rangle _{k+1}=\displaystyle \int _{0}^{1}\dfrac{\mathrm{d}P(t)}{\mathrm{d}t} W^{(k+1)}(t)Q^{*}(t)\mathrm{d}t,\) integrating by parts and taking into account equalities (5.13) and (5.14) in Remark 5.5, we get,

$$\begin{aligned} \left\langle \dfrac{\mathrm{d}P}{\mathrm{d}t},Q\right\rangle _{k+1}= & {} -\int _{0}^{1}P(t) \dfrac{\mathrm{d}}{\mathrm{d}t}\left( W^{(k+1)}(t)\right) Q^{*}(t)\mathrm{d}t-\int _{0}^{1}P(t) W^{(k+1)}(t)\left( \dfrac{\mathrm{d}Q(t)}{\mathrm{d}t}\right) ^{*}\mathrm{d}t\\= & {} -\int _{0}^{1}P(t)W^{(k)}(t)\Psi ^{(k)}(t)Q^{*}(t)\mathrm{d}t-\int _{0}^{1} P(t) W^{(k)}(t)\Phi ^{(k)}(t) \left( \dfrac{\mathrm{d}Q(t)}{\mathrm{d}t}\right) ^{*}\mathrm{d}t\\= & {} -\int _{0}^{1}P(t)W^{(k)}(t)\left( \Psi ^{(k)}\left( t\right) Q^{*}(t) + \Phi ^{(k)}\left( t\right) \left( \dfrac{\mathrm{d}Q(t)}{\mathrm{d}t}\right) ^{*} \right) \mathrm{d}t\\= & {} -\left\langle P,Q\eta ^{(k)}\right\rangle _{k}. \end{aligned}$$

\(\square \)
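The integration-by-parts argument can be illustrated in the scalar analogue, where \(W^{(k)}\) becomes the Jacobi weight \(t^{\alpha +k}(1-t)^{\beta +k}\) and \(\Phi ^{(k)},\Psi ^{(k)}\) become the scalar Pearson pair \(t(1-t)\) and \((\alpha +k+1)-(\alpha +\beta +2k+2)t\). The following numerical sketch (arbitrary parameters and test polynomials) verifies \(\langle P^{\prime },Q\rangle _{k+1}=-\langle P,Q^{\prime }\Phi +Q\Psi \rangle _{k}\):

```python
from math import isclose

def simpson(f, n=2000):
    # composite Simpson's rule on [0, 1] with n (even) subintervals
    h = 1.0 / n
    s = f(0.0) + f(1.0)
    s += 4.0 * sum(f((2*i - 1) * h) for i in range(1, n//2 + 1))
    s += 2.0 * sum(f(2*i * h) for i in range(1, n//2))
    return s * h / 3.0

a, b, k = 2.0, 2.0, 0                                 # hypothetical parameters
w   = lambda t: t**(a + k) * (1.0 - t)**(b + k)       # scalar analogue of W^{(k)}
phi = lambda t: t * (1.0 - t)                         # scalar analogue of Phi^{(k)}
psi = lambda t: (a + k + 1) - (a + b + 2*k + 2)*t     # scalar analogue of Psi^{(k)}

P,  dP = (lambda t: t**3 - 2*t), (lambda t: 3*t**2 - 2)   # arbitrary test pair
Q,  dQ = (lambda t: t**2 + 1),  (lambda t: 2*t)

lhs = simpson(lambda t: dP(t) * w(t) * phi(t) * Q(t))     # <P', Q>_{k+1}
rhs = -simpson(lambda t: P(t) * w(t) * (dQ(t)*phi(t) + Q(t)*psi(t)))
assert isclose(lhs, rhs, rel_tol=1e-8, abs_tol=1e-12)
```

In the scalar case, \(W^{(k+1)}=W^{(k)}\Phi ^{(k)}\) reduces to \(w^{(k+1)}=w^{(k)}\varphi \), which is why the left-hand side carries the extra factor \(\varphi \).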

Lemma 6.2

The following identity holds:

$$\begin{aligned} I\eta ^{(k+n-1)}\cdots \eta ^{(k+1)}\eta ^{(k)}= & {} {\mathcal {C}}_{n}^{k}P_{n+k}^{\left( \alpha ,\beta ,v,k\right) }, \quad n \ge 1, \end{aligned}$$

for a given \(k\ge 0\), where

$$\begin{aligned} {\mathcal {C}}_{n}^{k}= & {} \left( -1\right) ^{n} \left( \alpha +\beta +3+2k+n\right) _{n}\nonumber \\&\begin{pmatrix} \dfrac{\left( \kappa _{v,\beta } +2(k+1+n)\right) }{\left( \kappa _{v,\beta } +2(k+1)\right) } &{} 0 \\ 0 &{} \dfrac{\left( \kappa _{-v,\beta } +2(k+1+n)\right) }{\left( \kappa _{-v,\beta } +2(k+1)\right) } \end{pmatrix} ,\qquad \nonumber \\ n\ge & {} 1.\ \end{aligned}$$
(6.2)

Proof

It holds that \(I\eta ^{(k+n-1)}\cdots \eta ^{(k+1)}\eta ^{(k)}\) is a polynomial of degree \(n\). From the definition of the monic sequence of derivatives in (5.1), one has

$$\begin{aligned} \dfrac{\mathrm{d}}{\mathrm{d}t}P_{n+k}^{(\alpha ,\beta ,v,k)}\left( t\right) =nP_{n+k}^{(\alpha ,\beta ,v,k+1)}\left( t\right) . \end{aligned}$$

Thus, Proposition 6.1 implies that \(P_{n+k}^{(\alpha ,\beta ,v,k+1)}\eta ^{(k)}\) is a multiple of \(P_{n+k}^{(\alpha ,\beta ,v,k)}\).

Therefore, applying the raising operators \(\eta ^{(k+n-1)}\cdots \eta ^{(k+1)}\eta ^{(k)}\) to \(P_{n+k}^{(\alpha ,\beta ,v,k+n)}=I\), we get a multiple of \(P_{n+k}^{\left( \alpha ,\beta ,v,k\right) }.\) For the leading coefficient \({\mathcal {C}}_{n}^{k}\) of the polynomial \(I\eta ^{(k+n-1)}\cdots \eta ^{(k+1)} \eta ^{(k)}\), one obtains the expression

$$\begin{aligned} {\mathcal {C}}_{n}^{k} =\prod _{i=1}^{n}\left( \left( i-1\right) {\mathscr {A}}_{2}^{k+n-i}+{\mathscr {B}}_{1}^{k+n-i}\right) . \end{aligned}$$

The diagonal matrices \({\mathscr {A}}_2^k\) and \({\mathscr {B}}_1^k\) are defined in (5.7) and (5.10). Then, by replacing \({\mathscr {B}}_{1}^{k}=\left( \alpha +\beta +4+2k\right) {\mathscr {A}}_{2}^{k}\) in the identity above, we have

$$\begin{aligned} {\mathcal {C}}_{n}^{k}= & {} \prod _{i=1}^{n}\left( \left( 2n+\alpha +\beta +3+2k-i\right) {\mathscr {A}}_{2}^{k+n-i}\right) \\= & {} (-1)^{n}\prod _{i=1}^{n} \left( 2n+\alpha +\beta +3+2k-i\right) \\&\begin{pmatrix} \prod _{i=1}^{n}\tiny {\dfrac{ \left( \kappa _{v,\beta }+2(k+n-i+2)\right) }{\kappa _{v,\beta } +2(k+n-i+1)}} &{} 0 \\ 0 &{} \prod _{i=1}^{n}\tiny {\dfrac{\left( \kappa _{-v,\beta } +2(k+n-i+2)\right) }{\kappa _{-v,\beta }+2(k+n-i+1)}} \end{pmatrix}. \end{aligned}$$

Hence, the proof follows.

Note that \({\mathcal {C}}_{n}^{k}\) is non-singular since \(|\alpha -\beta |<|v|<\alpha +\beta +2\left( k+1\right) .\) \(\square \)
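The two simplifications behind (6.2), namely the Pochhammer product of the scalar factors and the telescoping of the diagonal factors, can be checked in exact arithmetic (the parameter values below are arbitrary):

```python
from fractions import Fraction as F

def rising(x, n):
    # Pochhammer symbol (x)_n = x (x + 1) ... (x + n - 1)
    r = F(1)
    for i in range(n):
        r *= x + i
    return r

a, b, kappa, k = F(1, 3), F(2, 5), F(7, 2), 2     # hypothetical sample values
for n in range(1, 6):
    prod_scalar, prod_diag = F(1), F(1)
    for i in range(1, n + 1):
        prod_scalar *= 2*n + a + b + 3 + 2*k - i
        prod_diag *= (kappa + 2*(k + n - i + 2)) / (kappa + 2*(k + n - i + 1))
    # product of (2n + alpha + beta + 3 + 2k - i), i = 1..n, is a Pochhammer symbol
    assert prod_scalar == rising(a + b + 3 + 2*k + n, n)
    # the diagonal factors telescope to (kappa + 2(k+1+n)) / (kappa + 2(k+1))
    assert prod_diag == (kappa + 2*(k + 1 + n)) / (kappa + 2*(k + 1))
```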

From the proposition and the lemma above, we obtain another expression for the Rodrigues formula.

Proposition 6.3

The polynomials \(\left( P_{n+k}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge 0}\) satisfy the following Rodrigues formula:

$$\begin{aligned} P_{n+k}^{\left( \alpha ,\beta ,v,k\right) }\left( t\right) =\left( {\mathcal {C}}_{n}^{k}\right) ^{-1}\left( \dfrac{\mathrm{d}^{n}}{\mathrm{d}t^{n}}W^{(k+n)}\left( t\right) \right) \left( W^{(k)}\left( t\right) \right) ^{-1}, \quad n\ge 1, \end{aligned}$$

where the matrices \({\mathcal {C}}_n^k\) are given by the expression in (6.2).

Proof

Let Q be a matrix-valued function and \(\eta ^{(k)}\) the raising operator in (6.1), then

$$\begin{aligned} Q\eta ^{(k)}=\dfrac{\mathrm{d}Q}{\mathrm{d}t} (\Phi ^{(k)})^{*}+Q (\Psi ^{(k)})^{*}. \end{aligned}$$

Using the identities (5.13) and (5.14), we obtain

$$\begin{aligned} Q\eta ^{(k)}=\frac{\mathrm{d}}{\mathrm{d}t}\left( Q W^{(k+1)}\right) \left( W^{(k)}\right) ^{-1}. \end{aligned}$$

Iterating this identity gives

$$\begin{aligned} Q\eta ^{(k+n-1)}\cdots \eta ^{(k+1)}\eta ^{(k)}=\dfrac{ \mathrm{d}^{n}}{\mathrm{d}t^{n}}\left( Q W^{(k+n)}\right) \left( W^{(k)}\right) ^{-1}. \end{aligned}$$

Now, taking \(Q\left( t\right) =I\,\) and using Lemma 6.2 we have

$$\begin{aligned} P_{n+k}^{\left( \alpha ,\beta ,v,k\right) }(t)=\left( {\mathcal {C}}_{n}^{k}\right) ^{-1}\dfrac{ \mathrm{d}^{n}}{\mathrm{d}t^{n}}\left( W^{(k+n)}(t)\right) \left( W^{(k)}(t)\right) ^{-1}. \end{aligned}$$

\(\square \)

Corollary 6.4

Let \(W^{(k)}\left( t\right) \) be the weight matrix (5.5). Then, the differential operator

$$\begin{aligned} E^{(k)}= \frac{\mathrm{d}}{\mathrm{d}t}\circ \eta ^{(k)}=\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}(\Phi ^{(k)}\left( t\right) )^{*} + \frac{\mathrm{d}}{\mathrm{d}t}(\Psi ^{(k)}\left( t\right) )^{*} \end{aligned}$$
(6.3)

is symmetric with respect to \(W^{(k)}\left( t\right) \) for all \(k\in {\mathbb {N}} _{0}.\) Moreover, the polynomials \(\left( P_{n+k}^{\left( \alpha ,\beta ,v,k\right) } \right) _{n\ge 0}\) are eigenfunctions of the operator \(E^{(k)}\) with eigenvalue

$$\begin{aligned} \Lambda _{n}\left( E^{(k)}\right) =n(n+\alpha +\beta +3+2k){\mathscr {A}}^{k}_{2}, \end{aligned}$$

where \({\mathscr {A}}^{k}_{2}\) is given by (5.7).

Proof

From Proposition 6.1 and the factorization \(E^{(k)}=\dfrac{\mathrm{d}}{\mathrm{d}t}\circ \eta ^{(k)}\), it follows directly that \(E^{(k)}\) is symmetric with respect to \( W^{(k)}.\)

The eigenvalue is obtained by looking at the leading coefficients of \(\Phi ^{(k)}(t)\) and \(\Psi ^{(k)}(t)\) in (5.6). Thus, we obtain \(\Lambda _{n}\left( E^{(k)}\right) =n\left( n-1\right) {\mathscr {A}}^{k}_{2}+n{\mathscr {B}}^{k}_{1}=n(n+\alpha +\beta +3+2k){\mathscr {A}}^{k}_{2}\). \(\square \)
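The final step of the proof is the scalar identity \(n(n-1)+n(\alpha +\beta +4+2k)=n(n+\alpha +\beta +3+2k)\), applied entrywise; a short exact-arithmetic check (with arbitrary sample values):

```python
from fractions import Fraction as F

a, b, k, A2 = F(1, 2), F(3, 4), 2, F(-5, 7)   # arbitrary sample values
B1 = (a + b + 4 + 2*k) * A2                   # relation (5.10), entrywise
for n in range(8):
    # n(n-1) A2 + n B1 = n(n + alpha + beta + 3 + 2k) A2
    assert n*(n - 1)*A2 + n*B1 == n*(n + a + b + 3 + 2*k)*A2
```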

Remark 6.5

The operators \(E^{(k)}\) and \(D^{\left( k\right) }\) in (5.2) commute. This result follows from the fact that the corresponding eigenvalues \(\Lambda _{n}\left( E^{(k)}\right) \) and \(\Lambda ^{(k)} _{n+k}\) in (5.3) commute, and the linear map that assigns to each differential operator in the algebra of differential operators \(D(W^{(k)})\) its corresponding sequence of eigenvalues, is an isomorphism (see [33, Propositions 2.6 and 2.8]).

Remark 6.6

The Darboux transform \({\widetilde{E}}^{(k)}= \eta ^{(k)} \circ \dfrac{\mathrm{d}}{\mathrm{d}t}\) of the operator \(E^{(k)}\) is not symmetric with respect to \(W^{(k)}\); however, it is symmetric with respect to \(W^{(k+1)}.\) Indeed,

$$\begin{aligned} \eta ^{(k)} \circ \dfrac{\mathrm{d}}{\mathrm{d}t}= & {} \frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}} \left( \Phi ^{(k)}\left( t\right) \right) ^{*} +\frac{\mathrm{d}}{\mathrm{d}t}\left( \left( \tfrac{\mathrm{d}}{\mathrm{d}t}\Phi ^{(k)}\left( t\right) \right) ^{*} +\left( \Psi ^{(k)}\left( t\right) \right) ^{*} \right) +\left( \tfrac{\mathrm{d}}{\mathrm{d}t}\Psi ^{(k)}\left( t\right) \right) ^{*}\\= & {} \frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}} \left( {\mathscr {A}}^{k}_{2}t^{2}+{\mathscr {A}}^{k}_{1}t+{\mathscr {A}}^{k}_{0}\right) ^{*} +\frac{\mathrm{d}}{\mathrm{d}t}\left( \left( 2{\mathscr {A}}^{k}_{2}+{\mathscr {B}}^{k}_{1}\right) t+{\mathscr {A}}^{k}_{1}+{\mathscr {B}}^{k}_{0}\right) ^{*}+({\mathscr {B}}^{k}_{1})^{*}. \end{aligned}$$

In fact, if we substitute the coefficient of the second derivative into the first symmetry condition in (2.2), we obtain

$$\begin{aligned} W^{(k)}(t)\left( {\mathscr {A}}^{k}_{2}t^{2}+{\mathscr {A}}^{k}_{1}t+{\mathscr {A}}^{k}_{0}\right) =\left( {\mathscr {A}}^{k}_{2}t^{2}+{\mathscr {A}}^{k}_{1}t+{\mathscr {A}}^{k}_{0}\right) ^{*}W^{(k)}\left( t\right) , \end{aligned}$$

which does not hold. Indeed, taking the leading coefficient \(W^{(k)}_2\) of \(W^{(k)}={{\widetilde{W}}}^{(\alpha ,\beta ,v,k)}\) in (5.5), one has in particular

$$\begin{aligned} W^{(k)}_{2}{\mathscr {A}}^{k}_{1}-\left( {\mathscr {A}}^{k}_{1}\right) ^{*}W^{(k)}_{2}=\frac{4v(\alpha +\beta +2(k+1))}{ \left( \kappa _{-v,\beta }+2(k+1)\right) (\kappa _{v,\beta }+2(k+1))} \begin{pmatrix} 0 &{} 1 \\ -1 &{} 0 \end{pmatrix} \ne \mathbf {0}. \end{aligned}$$

The second statement follows from Proposition 6.1.

7 The Algebra \(D\left( W\right) \)

In this section, we discuss the structure of the algebra of matrix differential operators having as common eigenfunctions a sequence of polynomials \(\left( P_{n}\right) _{n\ge 0}\) orthogonal with respect to the weight matrix \(W=W^{\left( \alpha ,\beta ,v\right) }\), i.e.,

$$\begin{aligned} D\left( W\right) =\left\{ D:P_{n}D=\Lambda _{n}\left( D\right) P_{n} ,\quad \Lambda _{n}\left( D\right) \in { {\mathbb {C}} }^{N\times N}\text { for all }n\ge 0\right\} . \end{aligned}$$

The definition of D(W) does not depend on the particular sequence of orthogonal polynomials (see [33, Corollary 2.5]).
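As a purely illustrative scalar analogue of the eigenfunction property defining \(D(W)\) (a sketch, not the matrix-valued setting of this paper), one can check symbolically that the classical Jacobi polynomials, shifted to \([0,1]\), are eigenfunctions of the scalar hypergeometric operator; the parameter values below are arbitrary sample choices:

```python
import sympy as sp

t = sp.symbols('t')
n = 3                                          # a fixed degree to check
a, b = sp.Rational(1, 2), sp.Rational(3, 2)    # sample values for alpha, beta

# Jacobi polynomial shifted to the interval [0, 1]
y = sp.jacobi(n, a, b, 2*t - 1)

# Scalar hypergeometric operator:  t(1-t) y'' + (beta+1 - (alpha+beta+2) t) y'
Dy = t*(1 - t)*sp.diff(y, t, 2) + (b + 1 - (a + b + 2)*t)*sp.diff(y, t)

# y is an eigenfunction with eigenvalue -n(n+alpha+beta+1)
lam = -n*(n + a + b + 1)
assert sp.simplify(Dy - lam*y) == 0
print("eigenvalue:", lam)                      # prints: eigenvalue: -18
```

The same check with symbolic \(\alpha ,\beta \) also simplifies to zero, but fixing numeric values keeps the computation fast.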

Theorem 7.1

Consider the weight matrix \(W=W^{(\alpha ,\beta ,v)}(t)\). Then, the differential operators of order at most two in D(W) are of the form

$$\begin{aligned} D=\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\left( {\mathcal {A}}_{2}t^{2}+{\mathcal {A}}_{1}t+{\mathcal {A}}_{0}\right) +\frac{\mathrm{d}}{\mathrm{d}t}\left( {\mathcal {B}}_{1}t+{\mathcal {B}}_{0}\right) +{\mathcal {C}}_{0}, \end{aligned}$$
(7.1)

where

$$\begin{aligned} {\mathcal {A}}_{2}= & {} \begin{pmatrix} a &{} c \\ b &{} d \end{pmatrix} \,,\quad a,b,c,d\in {\mathbb {C}}, \\ {\mathcal {A}}_{1}= & {} \frac{1}{2v}\left[ \begin{pmatrix} -2va&{}(a-d)\kappa _{-v,-\beta }\\ (a-d)\kappa _{v,-\beta }&{}-2vd \end{pmatrix} +b\kappa _{-v,-\beta }\begin{pmatrix}-1&{}0\\ 2&{}1 \end{pmatrix}+c\kappa _{v,-\beta }\begin{pmatrix}-1&{}-2\\ 0&{}1 \end{pmatrix}\right] , \\ {\mathcal {A}}_{0}= & {} \dfrac{ (a-d)\kappa _{v,-\beta }\kappa _{-v,-\beta } +b \kappa _{-v,-\beta } ^{2}-c\kappa _{v,-\beta } ^{2} }{ 4v^{2}} \begin{pmatrix} -1 &{} -1 \\ 1 &{} 1 \end{pmatrix} ,\\ {\mathcal {B}}_{1}= & {} \begin{pmatrix} a\left( \alpha +\beta +4\right) &{} \left( \kappa _{-v,\beta } +4\right) c \\ \left( \kappa _{v,\beta } +4\right) b &{} \left( \alpha +\beta +4\right) d \end{pmatrix}, \\ {\mathcal {B}}_{0}= & {} \frac{1}{4v}\left[ a\begin{pmatrix} -4((\alpha +1) v-\kappa _{-v,-\beta }) &{} \kappa _{-v,-\beta } \left( \kappa _{-v,\beta } +6\right) \\ \kappa _{v,-\beta } \left( \kappa _{v,\beta } +2\right) &{} 0 \end{pmatrix}\right. \\&+ b\kappa _{-v,-\beta }\begin{pmatrix} -\left( \kappa _{v,\beta } +2\right) &{}0 \\ 2 \left( \kappa _{v,\beta } +4\right) &{} \kappa _{v,\beta } +6 \end{pmatrix} \\&+\,\, c\kappa _{v,-\beta }\begin{pmatrix} - \left( \kappa _{-v,\beta } +6\right) &{} -2\left( \kappa _{-v,\beta } +4\right) \\ 0 &{} \kappa _{-v,\beta } +2 \end{pmatrix}\\&\left. 
+\,\, d\begin{pmatrix} 0 &{}-\kappa _{-v,-\beta }\left( \kappa _{-v,\beta } +2\right) \\ -\kappa _{v,-\beta } \left( \kappa _{v,\beta } +6\right) &{} -4((\alpha +1)v+\kappa _{v,-\beta }) \end{pmatrix}\right] , \\ {\mathcal {C}}_{0}= & {} \dfrac{1}{4}(\kappa _{v,\beta }+4 )(\kappa _{v,\beta }+2)\\&\begin{pmatrix} a\dfrac{\left( \kappa _{-v,\beta } +4\right) }{\kappa _{v,\beta }+4} -d\dfrac{ \kappa _{-v,\beta }+2}{\kappa _{v,\beta } +2} &{}c\dfrac{\left( \kappa _{-v,\beta } +4\right) (\kappa _{-v,\beta } +2)}{(\kappa _{v,\beta }+4 ) (\kappa _{v,\beta } +2)} \\ b &{} 0 \end{pmatrix} +eI,\quad e\in {\mathbb {C}}. \end{aligned}$$

Proof

Let \(\left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\) be the monic sequence of orthogonal polynomials with respect to \(W^{\left( \alpha ,\beta ,v\right) }.\) The polynomial \(P_{n}^{\left( \alpha ,\beta ,v\right) }\) is an eigenfunction of the operator D in (7.1) if

$$\begin{aligned} P_{n}^{\left( \alpha ,\beta ,v\right) }D=\Lambda _{n}P_{n}^{\left( \alpha ,\beta ,v\right) }, \end{aligned}$$

with \(\Lambda _{n}=n\left( n-1\right) {\mathcal {A}}_{2}+n{\mathcal {B}}_{1}+{\mathcal {C}}_{0}.\) This equation holds if and only if

$$\begin{aligned} \begin{array}{c} k(k-1){\mathcal {P}}_{n}^{k}{\mathcal {A}}_{2}+(k+1)k{\mathcal {P}}_{n}^{k+1}{\mathcal {A}}_{1}+\left( k+2\right) \left( k+1\right) {\mathcal {P}}_{n}^{k+2} {\mathcal {A}}_{0}+k{\mathcal {P}}_{n}^{k}{\mathcal {B}}_{1} \\ +\left( k+1\right) {\mathcal {P}}_{n}^{k+1}{\mathcal {B}}_{0}+{\mathcal {P}}_{n}^{k}{\mathcal {C}}_{0}-\left( n\left( n-1\right) {\mathcal {A}}_{2}+n{\mathcal {B}}_{1}+{\mathcal {C}}_{0}\right) {\mathcal {P}}_{n}^{k} =0, \end{array} \end{aligned}$$
(7.2)

where \({\mathcal {P}}_{n}^{k}\) denotes the \(k\)-th coefficient of \(P_{n}\), \(k=0,1,2,\ldots ,n.\)

To prove the theorem, we need to solve equation (7.2) for \(k=n-1\) and \(k=n-2\) to find relations between the parameters of the matrix-valued coefficients \( {\mathcal {A}}_{2},{\mathcal {A}}_{1},{\mathcal {A}}_{0},{\mathcal {B}}_{1},{\mathcal {B}}_{0}\) and \({\mathcal {C}}_{0}\).

We obtain the explicit expressions of \({\mathcal {P}}_{n}^{n-1}\) and \({\mathcal {P}}_{n}^{n-2}\) by substituting \(k=0\) in the equalities (5.15) and (5.16), respectively.

From equation (7.2) for \(k=n-1\), we get

$$\begin{aligned} \left( {\mathcal {P}}_{n}^{n-1}\Lambda _{n}-\Lambda _{n}{\mathcal {P}}_{n}^{n-1}\right) -{\mathcal {P}}_{n}^{n-1}\left( 2\left( n-1\right) {\mathcal {A}}_{2}+{\mathcal {B}}_{1}\right) +\left[ n\left( n-1\right) {\mathcal {A}}_{1}+n{\mathcal {B}}_{0}\right] =0. \nonumber \\ \end{aligned}$$
(7.3)

Multiplying equation (7.3) by

$$\begin{aligned} \frac{v\left( \alpha +\beta +2\left( n+1\right) \right) \left( \kappa _{v,\beta }+2\left( n+1\right) \right) \left( \kappa _{-v,\beta }+2\left( n+1\right) \right) }{n}, \end{aligned}$$

one obtains a matrix polynomial in n of degree four, each of whose coefficients must vanish. From the coefficient of \(n^{4}\), we get the expression for \({\mathcal {A}}_{1}\) given above, and from the coefficient of \(n^{3}\), we get \({\mathcal {B}}_{0}\) in terms of \({\mathcal {A}}_{2}\) and \({\mathcal {B}}_{1}\). Looking at the entries \(\left( 1,1\right) ,\left( 1,2\right) \) and \(\left( 2,2\right) \) of the coefficient of \(n^{2}\), and using the fact that \(\kappa _{v,-\beta }\) and \(\kappa _{-v,-\beta }\) are nonzero, we get \(\left( {\mathcal {C}}_{0}\right) _{12},\) \(\left( {\mathcal {C}}_{0}\right) _{11}\) and \(\left( {\mathcal {B}}_{1}\right) _{12}\), respectively, in terms of \({\mathcal {A}}_{2}\) and the other entries of \({\mathcal {C}}_{0}\) and \({\mathcal {B}}_{1}\). Finally, looking at the coefficient of n, we get the values of \(\left( {\mathcal {B}}_{1}\right) _{11},\) \( \left( {\mathcal {B}}_{1}\right) _{21},\) \(\left( {\mathcal {B}}_{1}\right) _{22}\) and \(\left( {\mathcal {C}}_0\right) _{21}\); consequently, we obtain the values of \({\mathcal {B}}_{1}\), \({\mathcal {B}}_{0}\) and \({\mathcal {C}}_{0}\) written above.
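This coefficient-matching step can be mimicked in a computer algebra system. The following toy sketch (with made-up unknowns x, y standing in for entries of the operator coefficients, not the paper's actual system) equates to zero every coefficient of a matrix polynomial in n:

```python
import sympy as sp

n = sp.symbols('n')
x, y = sp.symbols('x y')   # toy stand-ins for unknown entries of the coefficients

# A toy matrix identity required to hold for all n, mimicking (7.3):
M = sp.Matrix([[(x - 2)*n**2 + (x + y)*n, 0],
               [0, (x - 2)*n]])

# Each coefficient of each entry, viewed as a polynomial in n, must vanish.
eqs = [c for i in range(2) for j in range(2)
       for c in sp.Poly(M[i, j], n).all_coeffs() if c != 0]
sol = sp.solve(eqs, [x, y], dict=True)
print(sol)   # [{x: 2, y: -2}]
```

In the actual proof, the same idea is applied to the degree-four (resp. degree-eight) polynomial in n obtained from (7.3) (resp. (7.4)).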

Analogously, from equation (7.2) for \(k=n-2\), we obtain

$$\begin{aligned} \left( {\mathcal {P}}_{n}^{n-2}\Lambda _{n-2}-\Lambda _{n}{\mathcal {P}}_{n}^{n-2}\right) +\left( n-1\right) {\mathcal {P}}_{n}^{n-1}\left( \left( n-2\right) {\mathcal {A}}_{1}+{\mathcal {B}}_{0}\right) +n\left( n-1\right) {\mathcal {A}}_{0}=0. \end{aligned}$$
(7.4)

Multiplying equation (7.4) by

$$\begin{aligned}&v^{2}(\alpha +\beta +2n+1)(\alpha +\beta +2\left( n+1\right) )(\kappa _{v,\beta }+2\left( n+1\right) )\\&\quad (\kappa _{v,\beta }+\alpha +\beta +2\left( 2n+1\right) )(\kappa _{-v,\beta }+\alpha +\beta +2\left( 2n+1\right) )(\kappa _{-v,\beta }+2\left( n+1\right) ), \end{aligned}$$

one obtains a matrix polynomial in n of degree eight, each of whose coefficients must vanish. We get the expression of \( {\mathcal {A}}_{0}\) from the coefficient of \(n^{8}\).

Thus, if we replace the expressions of \({\mathcal {A}}_{0},\ {\mathcal {A}}_{1},\ {\mathcal {B}}_{1},\ {\mathcal {B}}_{0}\) and the entries \(\left( 1,1\right) ,\) \(\left( 1,2\right) \) and \(\left( 2,1\right) \) of \({\mathcal {C}}_0\) in (7.3) and (7.4), both equations hold true.

Let \({\mathcal {D}}_{2}\) be the complex vector space of differential operators in D(W) of order at most two. The computation above shows that any such operator is determined by the five parameters a, b, c, d, e, so \(\dim {\mathcal {D}}_{2}\le 5\).

If D is symmetric, then \(D\in D\left( W\right) \). Using the symmetry equations in (2.2), one verifies that the operator D in (7.1) is symmetric with respect to W if and only if \(a,d,e\in {\mathbb {R}} \) and the condition

$$\begin{aligned} b\dfrac{\kappa _{v,\beta }+2}{\kappa _{v,-\beta }}=-{\overline{c}}\dfrac{\kappa _{-v,\beta }+2}{\kappa _{-v,-\beta }} \end{aligned}$$
(7.5)

holds true. Indeed, writing \(W(t)=W^{\left( \alpha ,\beta ,v\right) }=W_2t^2+W_1t+W_0\), from the first equation of symmetry in (2.2), we have that \(W_{2}{\mathcal {A}}_{2}^{*}-{\mathcal {A}}_{2}W_{2}=0\), i.e.,

$$\begin{aligned} \begin{pmatrix} 2\,\mathrm{Im}\left( a\right) \dfrac{(\kappa _{v,\beta }+2)}{\kappa _{v,-\beta }} &{}-{\overline{b}}\dfrac{(\kappa _{v,\beta }+2)}{\kappa _{v,-\beta }}-c\dfrac{(\kappa _{-v,\beta }+2)}{\kappa _{-v,-\beta }} \\ b\dfrac{(\kappa _{v,\beta }+2)}{ \kappa _{v,-\beta }}+{\overline{c}}\dfrac{(\kappa _{-v,\beta }+2)}{\kappa _{-v,-\beta }} &{} -2\,\mathrm{Im}\left( d\right) \dfrac{(\kappa _{v,\beta }+2)}{\kappa _{-v,-\beta }} \end{pmatrix} =0, \end{aligned}$$
(7.6)

where \(\mathrm{Im}(z)\) denotes the imaginary part of a complex number z. Then, since \(\kappa _{v,\beta }+2>0\) because of the restrictions on the parameters \(\alpha \), \(\beta \) and v in the definition of \(W^{(\alpha ,\beta ,v)}\) in (2.4), equation (7.6) holds if and only if \(a,d\in {\mathbb {R}} \) and condition (7.5) is satisfied.

In addition, from the third symmetry equation in (2.2), we have that \(e\in {\mathbb {R}}\). Thus, there exist at least five linearly independent symmetric operators of order at most two in D(W). Therefore, \(\dim {\mathcal {D}}_{2}=5\). \(\square \)

By taking \(a=1\) and \(d=1\), respectively, as the only nonzero parameters in the expression of the operator in (7.1), we obtain the operators:

$$\begin{aligned} D_{1}= & {} \frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\left[ \begin{pmatrix} t^{2}-t&{}\dfrac{\kappa _{-v,-\beta }}{2v}t\\ \dfrac{\kappa _{v,-\beta }}{2v}t&{}0\end{pmatrix} +\dfrac{\kappa _{v,-\beta } \kappa _{-v,-\beta }}{4v^{2}} \begin{pmatrix} -1&{}-1\\ 1&{}1 \end{pmatrix}\right] \\&+ \frac{\mathrm{d}}{\mathrm{d}t}\begin{pmatrix} \left( \alpha +\beta +4\right) t+\dfrac{\kappa _{-v,-\beta }}{v}-(\alpha +1) &{} \dfrac{\kappa _{-v,-\beta } \left( \kappa _{-v,\beta } +6\right) }{4v}\\ \dfrac{\kappa _{v,-\beta } \left( \kappa _{v,\beta }+2\right) }{4v}&{} 0 \end{pmatrix}\\&+ \begin{pmatrix} \dfrac{\left( \kappa _{-v,\beta } +4\right) \left( \kappa _{v,\beta } +2\right) }{4} &{} 0 \\ 0 &{} 0 \end{pmatrix},\\ D_{2}= & {} \frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\left[ \begin{pmatrix}0&{}-\dfrac{\kappa _{-v,-\beta }}{2v}t\\ -\dfrac{\kappa _{v,-\beta }}{2v}t&{}t^2-t\end{pmatrix} -\dfrac{\kappa _{-v,-\beta }\kappa _{v,-\beta }}{ 4v^{2}} \begin{pmatrix} -1&{}-1\\ 1&{}1 \end{pmatrix}\right] \\&+ \frac{\mathrm{d}}{\mathrm{d}t} \begin{pmatrix} 0 &{} -\dfrac{\kappa _{-v,-\beta } ( \kappa _{-v,\beta }+2) }{4v} \\ -\dfrac{\kappa _{v,-\beta } \left( \kappa _{v,\beta }+6\right) }{ 4v}&{} \left( \alpha +\beta +4\right) t-\dfrac{\kappa _{v,-\beta }}{v}-(\alpha +1) \end{pmatrix}\\&+ \begin{pmatrix} -\dfrac{1}{4}\left( \kappa _{v,\beta } +4\right) \left( \kappa _{-v,\beta } +2\right) &{} 0 \\ 0 &{} 0 \end{pmatrix}. \end{aligned}$$

Analogously, by choosing as nonzero parameters \(c=1\), \(b=-\dfrac{\kappa _{v,-\beta }(\kappa _{-v,\beta }+2)}{\kappa _{-v,-\beta }(\kappa _{v,\beta }+2)}\) and \(c=i\), \(b=i\dfrac{\kappa _{v,-\beta }(\kappa _{-v,\beta }+2)}{\kappa _{-v,-\beta }(\kappa _{v,\beta }+2)}\), respectively, we define the operators:

$$\begin{aligned} D_{3}&=\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}} \left\{ \left( \begin{array}{cc} 0 &{}1 \\ -\dfrac{\kappa _{v,-\beta }(\kappa _{-v,\beta }+2)}{\kappa _{-v,-\beta }(\kappa _{v,\beta }+2)}&{}0 \end{array} \right) t^2 +\dfrac{\kappa _{v,-\beta }}{\kappa _{v,\beta }+2 }\left[ \left( \begin{array}{cc} -1 &{} -\dfrac{\kappa _{v,\beta }+2 }{v} \\ -\dfrac{\kappa _{-v,\beta }+2 }{v}&{}1 \end{array} \right) t\right. \right. \\&\quad +\dfrac{1}{2}\left. \left. \left( \dfrac{(\alpha +\beta +2)(\alpha -\beta )}{v^2}+1 \right) \begin{pmatrix}1&{}1\\ -1&{}-1 \end{pmatrix} \right] \right\} \\&\quad + \dfrac{\mathrm{d}}{\mathrm{d}t}\left[ \begin{pmatrix} 0 &{} \kappa _{-v,\beta }+4 \\ -\dfrac{\kappa _{v,-\beta }(\kappa _{-v,\beta }+2)(\kappa _{v,\beta }+4)}{\kappa _{-v,-\beta }(\kappa _{v,\beta }+2)} &{} 0 \end{pmatrix} t \right. \\&\quad +\dfrac{\kappa _{v,-\beta }}{v} \left. \begin{pmatrix} -1 &{} -\dfrac{(\kappa _{-v,\beta }+4)}{2} \\ -\dfrac{(\kappa _{v,\beta }+4)(\kappa _{-v,\beta }+2)}{2(\kappa _{v,\beta }+2)} &{} -\dfrac{\kappa _{-v,\beta }+2}{\kappa _{v,\beta }+2} \end{pmatrix} \right] \\&\quad + \dfrac{1}{4}(\kappa _{-v,\beta }+2) \begin{pmatrix} 0 &{} \kappa _{-v,\beta }+4 \\ -\dfrac{(\kappa _{v,\beta }+4)\kappa _{v,-\beta }}{\kappa _{-v,-\beta }} &{} 0 \end{pmatrix},\\ iD_{4}&=\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}} \left\{ \left( \begin{array}{cc} 0 &{} -1 \\ -\dfrac{\kappa _{v,-\beta }(\kappa _{-v,\beta }+2)}{\kappa _{-v,-\beta }(\kappa _{v,\beta }+2)}&{}0 \end{array} \right) t^2+\dfrac{\kappa _{v,-\beta }}{v(\kappa _{v,\beta }+2)}\left[ \left( \begin{array}{cc} \alpha +\beta +2 &{} \kappa _{v,\beta }+2 \\ -(\kappa _{-v,\beta }+2)&{}-(\alpha +\beta +2) \end{array}\right) t \right. \right. \\&\quad \left. \left. 
- (\alpha +1) \begin{pmatrix}1&{}1\\ -1&{}-1 \end{pmatrix} \right] \right\} +\dfrac{\mathrm{d}}{\mathrm{d}t} \left[ \begin{pmatrix} 0 &{} -(\kappa _{-v,\beta }+4) \\ -\dfrac{\kappa _{v,-\beta }(\kappa _{-v,\beta }+2)(\kappa _{v,\beta }+4)}{\kappa _{-v,-\beta }(\kappa _{v,\beta }+2)} &{}0 \end{pmatrix}t \right. \\&\quad + \dfrac{\kappa _{v,-\beta }}{2v} \left. \begin{pmatrix} \kappa _{-v,\beta }+4 &{} \kappa _{-v,\beta }+4 \\ -\dfrac{(\kappa _{v,\beta }+4)(\kappa _{-v,\beta }+2)}{(\kappa _{v,\beta }+2)}&{}-\dfrac{(\kappa _{v,\beta }+4)(\kappa _{-v,\beta }+2)}{(\kappa _{v,\beta }+2)} \end{pmatrix}\right] \\&\quad -\dfrac{\kappa _{-v,\beta }+2}{4}\begin{pmatrix} 0 &{} \kappa _{-v,\beta }+4\\ \dfrac{(\kappa _{v,\beta }+4)\kappa _{v,-\beta }}{\kappa _{-v,-\beta }} &{} 0 \end{pmatrix}. \end{aligned}$$

One has the following:

Corollary 7.2

The set of symmetric operators \(\left\{ D_{1},D_{2},D_{3},D_{4},I\right\} \) is a basis of the space of differential operators of order at most two in D(W). Moreover, the corresponding eigenvalues for the differential operators \( D_{1},D_{2},D_{3}\) and \(D_{4}\) are

$$\begin{aligned} \Lambda _{n}\left( D_{1}\right)= & {} \frac{1}{4} \begin{pmatrix} \left( \kappa _{v,\beta }+2(n+1) \right) \left( \kappa _{-v,\beta }+2(n+2) \right) &{} 0 \\ 0 &{} 0 \end{pmatrix}, \\ \Lambda _{n}\left( D_{2}\right)= & {} \begin{pmatrix} -\dfrac{1}{4}\left( \kappa _{-v,\beta }+2 \right) \left( \kappa _{v,\beta }+4 \right) &{} 0 \\ 0 &{} \left( n+\alpha +\beta +3\right) n \end{pmatrix}, \\ \Lambda _{n}\left( D_{3}\right)= & {} \dfrac{1}{4}\left( \kappa _{-v,\beta }+2(1+n) \right) \left( \kappa _{-v,\beta }+ 2(2+n) \right) \begin{pmatrix} 0 &{} 1 \\ 0&{} 0 \end{pmatrix}\\&-\dfrac{\left( \kappa _{v,\beta }+2(1+n)\right) \left( \kappa _{v,\beta }+2(2+n)\right) \left( \kappa _{-v,\beta }+2\right) \kappa _{v,-\beta } }{ 4\kappa _{-v,-\beta } \left( \kappa _{v,\beta }+2\right) } \begin{pmatrix} 0 &{} 0 \\ 1&{} 0 \end{pmatrix}, \\ \Lambda _{n}\left( iD_{4}\right)= & {} -\dfrac{1}{4}\left( \kappa _{-v,\beta }+2(1+n) \right) \left( \kappa _{-v,\beta }+ 2(2+n) \right) \begin{pmatrix} 0 &{} 1 \\ 0&{} 0 \end{pmatrix}\\&-\dfrac{\left( \kappa _{v,\beta }+2(1+n)\right) \left( \kappa _{v,\beta }+2(2+n)\right) \left( \kappa _{-v,\beta }+2\right) \kappa _{v,-\beta } }{ 4\kappa _{-v,-\beta } \left( \kappa _{v,\beta }+2\right) } \begin{pmatrix} 0 &{} 0 \\ 1&{} 0 \end{pmatrix}. \end{aligned}$$

Corollary 7.3

The differential operators appearing in (2.7) and (6.3) are \( D^{\left( \alpha ,\beta ,v\right) }=-D_{1}-D_{2}\) and \(E^{(0)}=-\dfrac{\kappa _{v,\beta }+4}{\kappa _{v,\beta }+2}D_{1}-\dfrac{\kappa _{-v,\beta }+4}{\kappa _{-v,\beta }+2}D_{2}\), respectively.

Corollary 7.4

There are no operators of order one in the algebra D(W).

Proof

Suppose that there exists a differential operator of order one in D(W). By Corollary 7.2, it can be written as \(D= aD_{1}+bD_{2}+cD_{3}+d(iD_{4})+eI\), with \(a,b,c,d,e \in {\mathbb {R}}\). Equating to zero the matrix-valued coefficient of \(\dfrac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\), one obtains:

$$\begin{aligned} \begin{pmatrix} a &{} c-di \\ -\dfrac{\kappa _{v,-\beta }\left( \kappa _{-v,\beta }+2 \right) }{\kappa _{-v,-\beta }\left( \kappa _{v,\beta }+2\right) }(c+di) &{} b \end{pmatrix} ={\mathbf {0}}. \end{aligned}$$

Therefore \(a=b=c=d=0\), so that \(D=eI\) is of order zero, a contradiction. \(\square \)

Corollary 7.5

The algebra \(D\left( W\right) \) is not commutative.

Proof

Using the isomorphism between the algebra of differential operators D(W) and the algebra of matrix-valued functions of n generated by the corresponding eigenvalues, we have that \(D_{1}D_{3}\ne D_{3}D_{1}\), since \(\Lambda _{n}\left( D_{1}\right) \Lambda _{n}\left( D_{3}\right) \ne \Lambda _{n}\left( D_{3}\right) \Lambda _{n}\left( D_{1}\right) \). \(\square \)
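The zero patterns in Corollary 7.2 already make this visible: \(\Lambda _{n}\left( D_{1}\right) \) is supported on the \((1,1)\) entry, while \(\Lambda _{n}\left( D_{3}\right) \) is purely off-diagonal. A minimal numeric sketch (with hypothetical entry values standing in for the actual scalar factors, since only the zero patterns matter):

```python
import numpy as np

# Numeric stand-ins for the eigenvalue matrices of Corollary 7.2
# (hypothetical entry values chosen for illustration):
L1 = np.array([[5.0, 0.0],    # Lambda_n(D1): supported on the (1,1) entry
               [0.0, 0.0]])
L3 = np.array([[0.0,  2.0],   # Lambda_n(D3): purely off-diagonal
               [-3.0, 0.0]])

commutator = L1 @ L3 - L3 @ L1
print(commutator)             # nonzero, so the eigenvalues do not commute
assert not np.allclose(commutator, 0)
```

A diagonal matrix with distinct diagonal entries never commutes with a matrix having a nonzero off-diagonal entry, which is exactly the situation here.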

Remark 7.6

In [42], the authors study the algebra \(D\left( W^{\left( p,q\right) }\right) \), where \(W^{\left( p,q\right) }\) is, for \(p\ne \dfrac{q}{2}\), the irreducible weight matrix

$$\begin{aligned}&W^{\left( p,q\right) }(t)\\&\quad =\left( t\left( 1-t\right) \right) ^{\dfrac{q-2}{2}} \begin{pmatrix} 2pt^{2}-2pt+\dfrac{q}{2} &{} qt-\dfrac{q}{2} \\ qt-\dfrac{q}{2} &{} -2\left( p-q\right) t^{2}+2\left( p-q\right) t+\dfrac{q}{2} \end{pmatrix},\quad t \in [0,1]. \end{aligned}$$

Let us denote by \(D_{1}^{\left( p,q\right) },D_{2}^{\left( p,q\right) },D_{3}^{\left( p,q\right) }\) and \(D_{4}^{\left( p,q\right) }\) the differential operators appearing in [42]. Then, taking \(\alpha =\beta =\dfrac{q}{2}-1\) in (2.4) and writing \(v=2p-q\), we have the following relations with the operators \(D_i,\ i=1,2,3,4\), defined above:

$$\begin{aligned} D_{1}^{\left( p,q\right) }= & {} D_{1},\quad D_{2}^{\left( p,q\right) }=D_{2}+\left( q-p\right) \left( p+1\right) I ,\\ D_{3}^{\left( p,q\right) }= & {} \frac{p}{2(q-p) }\left( D_{3}+iD_{4}\right) ,\quad D_{4}^{\left( p,q\right) }=\frac{ 1}{2}\left( D_{3}-iD_{4}\right) . \end{aligned}$$