Abstract
In this paper, Geronimus transformations for matrix orthogonal polynomials on the real line are studied. The orthogonality is understood in a broad sense and is given in terms of a nondegenerate continuous sesquilinear form, which in turn is determined by a quasidefinite matrix of bivariate generalized functions with a well defined support. The discussion of the orthogonality for such a sesquilinear form includes, among others, matrix Hankel cases with linear functionals, general matrix Sobolev orthogonality, and discrete orthogonal polynomials with an infinite support. The results are mainly concerned with the derivation of Christoffel type formulas, which allow one to express the perturbed matrix biorthogonal polynomials and their norms in terms of the original ones. The basic tool is the Gauss–Borel factorization of the Gram matrix, and particular attention is paid to the, in general, non-associative character of the product of semi-infinite matrices. The Geronimus transformation, in which a right multiplication by the inverse of a matrix polynomial and an addition of adequate masses are performed, is considered. The resolvent matrix and connection formulas are given. Two different methods are developed. The first is a spectral one, based on the spectral properties of the perturbing polynomial and constructed in terms of the second kind functions; this approach requires the perturbing matrix polynomial to have a nonsingular leading term. Using spectral techniques and spectral jets, Christoffel–Geronimus formulas for the transformed polynomials and norms are then presented. For this type of transformation, the paper also proposes an alternative method, which does not require spectral techniques and is valid also for singular leading coefficients. When the leading term is nonsingular, a comparison between both methods is presented.
The nonspectral method is applied to unimodular Christoffel perturbations, and a simple example of a degree one massless Geronimus perturbation is given.
1 Introduction
Perturbations of a linear functional u in the linear space of polynomials with real coefficients have been extensively studied in the theory of orthogonal polynomials on the real line (scalar OPRL). In particular, in the positive definite case, when linear functionals associated with probability measures supported on an infinite subset of the real line are considered, such perturbations provide interesting information in the framework of Gaussian quadrature rules, since the perturbation yields new nodes and Christoffel numbers, see [25, 26]. Three perturbations have attracted the interest of researchers. Christoffel perturbations, which appear when one considers a new functional \(\hat{u}= p(x) u\), where p(x) is a polynomial, were studied in 1858 by the German mathematician Christoffel in [13], in the framework of Gaussian quadrature rules. He found explicit formulas relating the corresponding sequences of orthogonal polynomials with respect to two measures, the Lebesgue measure \({\text {d}}\mu \) supported on the interval \((-1,1)\) and \({\text {d}}\hat{\mu }(x)= p(x) {\text {d}}\mu (x)\), with \(p(x)=(x-q_1)\cdots (x-q_N)\) a polynomial of fixed sign on the support of \({\text {d}}\mu \), as well as the distribution of their zeros as nodes in such quadrature rules. Nowadays, these are called Christoffel formulas and can be considered a classical result in the theory of orthogonal polynomials, to be found in a number of textbooks, see for example [12, 26, 71]. Explicit relations between the corresponding sequences of orthogonal polynomials have been extensively studied, see [25], as well as the connection between the corresponding monic Jacobi matrices in the framework of the so-called Darboux transformations based on the LU factorization of such matrices [9].
In the theory of orthogonal polynomials, connection formulas between two families of orthogonal polynomials allow one to express any polynomial of a given degree n as a linear combination of all polynomials of degree less than or equal to n in the second family. A noteworthy fact regarding Christoffel's finding is that in some cases the number of terms does not grow with the degree n but, remarkably, remains constant, equal to the degree of the perturbing polynomial. See [25, 26] for more on Christoffel type formulas, as well as [10], where Darboux transformations for measures supported on the unit circle are studied in depth.
The Geronimus transformation appears when one deals with perturbed functionals v defined by \(p(x) v=u,\) where p(x) is a polynomial. Such transformations were used by the Russian mathematician Geronimus, see [35], in order to give an elegant proof of a result by Hahn [46] characterizing the classical orthogonal polynomials (Hermite, Laguerre, Jacobi, and Bessel) as those orthogonal polynomials whose first derivatives are also orthogonal polynomials; for an English account of Geronimus’ paper [35] see [40]. Again, as happened for the Christoffel transformation, within the Geronimus transformation one can find Christoffel type formulas, now in terms of the second kind functions, relating the corresponding sequences of orthogonal polynomials, see for example the work of Maroni [55] for a perturbation of the type \(p(x)=x-a\).
Krein was the first to discuss matrix orthogonal polynomials, see [48]; for a review on the subject see [15]. The great activity in this scientific field has produced a vast bibliography, treating, among other things, inner products defined on the linear space of polynomials with matrix coefficients, the existence of the corresponding sequences of matrix orthogonal polynomials on the real line, see [18, 19, 56, 63, 70], and their applications in Gaussian quadrature for matrix-valued functions [69], scattering theory [5, 34], and system theory [24]. The seminal paper [20] gave the key for further studies in this subject and, subsequently, relevant advances have been achieved in the study of families of matrix orthogonal polynomials that are eigenfunctions of second order linear differential operators, and of their structural properties [18, 21, 41, 42]. In [11], sequences of orthogonal polynomials satisfying a first order linear matrix differential equation were found, a remarkable difference with the scalar scenario, where such a situation does not occur. The spectral problem for second order linear difference operators with polynomial coefficients has been considered in [4]. Therein, four families of matrix orthogonal polynomials (matrix relatives of the Charlier, Meixner, and Krawtchouk scalar polynomials, and another one that seems not to have any scalar relative) are obtained as illustrative examples of the method developed there.
This introduction continues with two subsections. One is focused on the spectral theory of matrix polynomials, where we follow [39]. The other gives basic background on matrix orthogonal polynomials, see [15]. In the second section we extend the Geronimus transformations to the matrix realm and find connection formulas for the biorthogonal polynomials and the Christoffel–Darboux kernels. These developments lead to the Christoffel–Geronimus formula for matrix perturbations of Geronimus type. As mentioned, we present two different schemes. In the first one, which can be applied when the perturbing polynomial has a nonsingular leading coefficient, we express the perturbed objects in terms of spectral jets of the primitive second kind functions and Christoffel–Darboux kernels. We present a second approach, applicable even when the leading coefficient is singular. For each method we consider two different situations: the less interesting case of biorthogonal polynomials of degree less than the degree of the perturbing polynomial, and the much more interesting situation where the degrees of the families of biorthogonal polynomials are greater than or equal to the degree of the perturbing polynomial. To end the section, we compare the spectral and nonspectral methods and present a number of applications. In particular, we deal with unimodular polynomial matrix perturbations and degree one matrix Geronimus transformations. Notice that in [6] we have extended these results to the matrix linear spectral case, i.e. to Uvarov–Geronimus–Christoffel formulas for certain matrix rational perturbations. Finally, an appendix with the definitions of Schur complements and quasideterminants is included in order to give a perspective on these basic tools in the theory of matrix orthogonal polynomials.
1.1 On spectral theory of matrix polynomials
Here we give some background material regarding the spectral theory of matrix polynomials [39, 52].
Definition 1
Let \(A_0, A_1,\ldots ,A_N\in {\mathbb {C}}^{p\times p}\) be square matrices of size \(p\times p\) with complex entries and \(A_N\ne 0_p\). Then
$$\begin{aligned} W(x)=A_Nx^N+A_{N-1}x^{N-1}+\cdots +A_1x+A_0 \end{aligned}$$
is said to be a matrix polynomial of degree N, \(\deg (W(x))=N\). The matrix polynomial is said to be monic when \(A_N=I_p\), where \(I_p\in {\mathbb {C}}^{p\times p}\) denotes the identity matrix. The linear space—a bimodule for the ring of matrices \({\mathbb C}^{p\times p}\)—of matrix polynomials with coefficients in \({\mathbb {C}}^{p\times p}\) will be denoted by \({\mathbb {C}}^{p\times p}[x]\).
Definition 2
(Eigenvalues) The spectrum, or the set of eigenvalues, \(\sigma (W(x))\) of a matrix polynomial W is the zero set of \(\det W(x)\), i.e.
$$\begin{aligned} \sigma (W(x)):=\{x\in {\mathbb {C}}: \det W(x)=0\}. \end{aligned}$$
Proposition 1
A monic matrix polynomial W(x), \(\deg (W(x))=N\), has Np (counting multiplicities) eigenvalues or zeros, i.e., we can write
$$\begin{aligned} \det W(x)=\prod _{a=1}^{q}(x-x_a)^{\alpha _a}, \end{aligned}$$
with the \(x_a\) distinct and \(Np=\alpha _1+\cdots +\alpha _q\).
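As a small numerical illustration (the matrices \(A_0\), \(A_1\) below are our own choices, not taken from the text), the Np zeros of \(\det W(x)\) of a monic matrix polynomial can be computed as the eigenvalues of its block companion linearization:

```python
import numpy as np

# Toy monic matrix polynomial W(x) = I x^2 + A1 x + A0 with p = 2, N = 2.
# Its Np = 4 eigenvalues are the zeros of det W(x), obtained here as the
# eigenvalues of the standard block companion linearization.
p = 2
A0 = np.array([[2.0, 0.0], [0.0, 3.0]])
A1 = np.array([[-3.0, 0.0], [0.0, -4.0]])

C = np.block([[np.zeros((p, p)), np.eye(p)],
              [-A0, -A1]])
eigs = np.sort(np.linalg.eigvals(C).real)
print(eigs)  # det W(x) = (x^2-3x+2)(x^2-4x+3), zeros 1, 1, 2, 3

# each eigenvalue annihilates det W(x)
for x in eigs:
    W = np.eye(p) * x**2 + A1 * x + A0
    assert abs(np.linalg.det(W)) < 1e-8
```

Any linearization with the same spectrum would serve equally well; the companion form is simply the most direct one.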
Proposition 2
Any nonsingular matrix polynomial \(W(x)\in {\mathbb {C}}^{p\times p}[x]\), \(\det W(x)\not \equiv 0\), can be represented as
$$\begin{aligned} W(x)=E_{x_0}(x){\text {diag}}\big ((x-x_0)^{\kappa _1},\ldots ,(x-x_0)^{\kappa _m}\big )F_{x_0}(x) \end{aligned}$$
at \(x=x_0\in {\mathbb {C}}\), where \(E_{x_0}(x)\) and \(F_{x_0}(x)\) are matrix polynomials, nonsingular at \(x=x_0\), and \(\kappa _1\le \cdots \le \kappa _m\) are nonnegative integers. Moreover, \(\{\kappa _1,\ldots ,\kappa _m\}\) are uniquely determined by W(x) and are known as the partial multiplicities of W(x) at \(x_0\).
Definition 3
Given an eigenvalue \(x_0\in \sigma (W(x))\) of a monic matrix polynomial \(W(x)\in {\mathbb {C}}^{p\times p}[x]\):
-
(i)
A non-zero vector \(r_{0}\in {\mathbb {C}}^p\) is said to be a right eigenvector, with eigenvalue \(x_0\in \sigma (W(x))\), whenever \(W(x_0)r_{0}=0\), i.e., \(r_{0}\in {\text {Ker}} W(x_0)\ne \{0\}\).
-
(ii)
A non-zero covector \(l_{0}\in \big ({\mathbb {C}}^p\big )^*\) is said to be a left eigenvector, with eigenvalue \(x_0\in \sigma (W(x))\), whenever \(l_{0}W(x_0)=0\), i.e., \(\big (l_{0}\big )^\top \in \big ( {\text {Ker}}(W(x_0))\big )^\perp ={\text {Ker}}\big ( (W(x_0))^\top \big )\ne \{0\}\).
-
(iii)
A sequence of vectors \(\{r_{0},r_{1},\ldots , r_{m-1}\}\) is said to be a right Jordan chain of length m corresponding to the eigenvalue \(x_0\in \sigma (W(x))\), if \(r_{0}\) is a right eigenvector of \(W(x_0)\) and
$$\begin{aligned} \sum _{s=0}^{j}\frac{1}{s!} \frac{{\text {d}}^sW}{{\text {d}} x^s} \Big |_{x=x_0}r_{j-s}&=0,&j&\in \{0,\ldots ,m-1\}. \end{aligned}$$ -
(iv)
A sequence of covectors \(\{l_{0},l_{1},\ldots , l_{m-1}\}\) is said to be a left Jordan chain of length m, corresponding to \(x_0\in \sigma (W^\top )\), if \(\{(l_{0})^\top ,(l_{1})^\top ,\ldots , (l_{m-1})^\top \}\) is a right Jordan chain of length m for the matrix polynomial \(\big (W(x)\big )^\top \).
-
(v)
A right root polynomial at \(x_0\) is a non-zero vector polynomial \(r(x)\in {\mathbb {C}}^p[x]\) such that W(x)r(x) has a zero of certain order at \(x=x_0\), the order of this zero is called the order of the root polynomial. Analogously, a left root polynomial is a non-zero covector polynomial \(l(x)\in \big ({\mathbb {C}}^p\big )^*[x]\) such that \(l(x_0)W(x_0)=0\).
-
(vi)
The maximal length of right (respectively left) Jordan chains corresponding to the eigenvalue \(x_0\) is called the multiplicity of the eigenvector \(r_{0}\) (respectively \(l_{0}\)) and is denoted by \(m(r_{0})\) (respectively \(m(l_{0})\)).
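The chain condition in item (iii) can be checked directly in the simplest degree one case; the matrices below are our own toy example, not taken from the text:

```python
import numpy as np
from math import factorial

# For the degree-one monic polynomial W(x) = I x - J, with J the 2x2 Jordan
# block at x0 = 1, the right Jordan chain condition
#     sum_{s=0}^{j} (1/s!) (d^s W / dx^s)(x0) r_{j-s} = 0,  j = 0, ..., m-1,
# reduces to the classical conditions (J - x0 I) r0 = 0, (J - x0 I) r1 = r0,
# so {e1, e2} is a right Jordan chain of length 2.
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])
x0 = 1.0
derivs = [x0 * np.eye(2) - J, np.eye(2)]  # W(x0), W'(x0); higher ones vanish

chain = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # r0, r1
residuals = [sum((1.0 / factorial(s)) * derivs[s] @ chain[j - s]
                 for s in range(j + 1))
             for j in range(len(chain))]
assert all(np.allclose(r, 0) for r in residuals)
print("right Jordan chain of length", len(chain))
```

This recovers the classical Jordan chain of the matrix J, in line with the remark that the definition generalizes Jordan chains of degree one matrix polynomials.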
Proposition 3
Given an eigenvalue \(x_0\in \sigma (W(x))\) of a monic matrix polynomial W(x), the multiplicities of right and left eigenvectors coincide and are equal to the corresponding partial multiplicities \(\kappa _i\).
The above definition generalizes the concept of Jordan chain for degree one matrix polynomials.
Proposition 4
The Taylor expansion of a right root polynomial r(x), respectively of a left root polynomial l(x), at a given eigenvalue \(x_0\in \sigma (W(x))\) of a monic matrix polynomial W(x),
provides us with a right Jordan chain (respectively, a left Jordan chain)
Proposition 5
Given an eigenvalue \(x_0\in \sigma (W(x))\) of a monic matrix polynomial W(x), with multiplicity \(s=\dim {\text {Ker}}W(x_0)\), we can construct s right root polynomials, respectively left root polynomials, for \(i\in \{1,\ldots ,s\}\),
where \(r_i(x)\) are right root polynomials (respectively \( l_i(x)\) are left root polynomials) with the largest order \(\kappa _i\) among all right root polynomials whose right eigenvector does not belong to \({\mathbb {C}}\{r_{0,1},\ldots ,r_{0,i-1}\}\) (respectively, left root polynomials whose left eigenvector does not belong to \({\mathbb C}\{l_{0,1},\ldots ,l_{0,i-1}\}\)).
Definition 4
(Canonical Jordan chains) A canonical set of right Jordan chains (respectively left Jordan chains) of the monic matrix polynomial W(x) corresponding to the eigenvalue \(x_0\in \sigma (W(x))\) is, in terms of the right root polynomials (respectively left root polynomials) described in Proposition 5, the following sets of vectors
Proposition 6
For a monic matrix polynomial W(x) the lengths \(\{\kappa _1,\ldots ,\kappa _s\}\) of the Jordan chains in a canonical set of Jordan chains of W(x) corresponding to the eigenvalue \(x_0\), see Definition 4, are the nonzero partial multiplicities of W(x) at \(x=x_0\) described in Proposition 2.
Definition 5
(Canonical Jordan chains and root polynomials) For each eigenvalue \(x_a\in \sigma (W(x))\) of a monic matrix polynomial W(x), with multiplicity \(\alpha _a\) and \(s_a=\dim {\text {Ker}} W(x_a)\), \(a\in \{1,\ldots ,q\}\), we choose a canonical set of right Jordan chains, respectively left Jordan chains,
and, consequently, with partial multiplicities satisfying \(\sum _{j=1}^{s_a}\kappa _j^{(a)}=\alpha _a\). Thus, we can consider the following right root polynomials
Definition 6
(Canonical Jordan pairs) We also define the corresponding canonical Jordan pair \((X_a,J_a)\) with \(X_a\) the matrix
and \(J_a\) the matrix
where \(J_{a,j}\in {\mathbb {C}}^{\kappa ^{(a)}_j\times \kappa ^{(a)}_j}\) are the Jordan blocks of the eigenvalue \(x_a\in \sigma (W(x))\). Then, we say that (X, J) with
is a canonical Jordan pair for W(x).
We have the following important result, see [39].
Proposition 7
The Jordan pairs of a monic matrix polynomial W(x) satisfy
A key property, see Theorem 1.20 of [39], is
Proposition 8
For any Jordan pair (X, J) of a monic matrix polynomial \(W(x)=I_px^N+A_{N-1}x^{N-1}+\cdots +A_0\) the matrix
is nonsingular.
Definition 7
(Jordan triple) Given
with \(Y_a\in {\mathbb {C}}^{\alpha _a\times p}\), we say that (X, J, Y) is a Jordan triple whenever
Moreover, Theorem 1.23 of [39] gives the following characterization.
Proposition 9
Two matrices \(X\in {\mathbb {C}}^{p\times Np}\) and \(J\in {\mathbb C}^{Np\times Np}\) constitute a Jordan pair of a monic matrix polynomial \(W(x)=I_px^N+A_{N-1}x^{N-1}+\cdots +A_0\) if and only if the following two properties hold:
-
(i)
The matrix
$$\begin{aligned} \begin{bmatrix} X\\ XJ\\ \vdots \\ XJ^{N-1} \end{bmatrix} \end{aligned}$$is nonsingular.
-
(ii)
$$\begin{aligned} A_0X+A_1XJ+\cdots +A_{N-1} XJ^{N-1}+XJ^N&=0_{p\times Np}. \end{aligned}$$
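Both conditions can be sketched for the standard pair coming from the companion linearization (our own example; a canonical Jordan pair satisfies the same identities):

```python
import numpy as np

# Standard pair for W(x) = I x^2 + A1 x + A0 (p = 2, N = 2; matrices chosen
# here for illustration): X = [I_p, 0] and J the block companion matrix.
p = 2
A0 = np.array([[0.0, 1.0], [2.0, 0.0]])
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
J = np.block([[np.zeros((p, p)), np.eye(p)],
              [-A0, -A1]])
X = np.hstack([np.eye(p), np.zeros((p, p))])

# (i) the column block [X; XJ] is nonsingular (here it is the identity)
col = np.vstack([X, X @ J])
assert abs(np.linalg.det(col)) > 1e-12

# (ii) A0 X + A1 X J + X J^2 = 0
res = A0 @ X + A1 @ (X @ J) + X @ J @ J
assert np.allclose(res, 0)
print("both conditions of Proposition 9 hold")
```

The computation \(XJ=[0_p, I_p]\), \(XJ^2=[-A_0,-A_1]\) makes both properties transparent for this pair.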
Proposition 10
Given a monic matrix polynomial W(x) the adapted root polynomials given in Definition 5 satisfy
Here, given a function f(x) we use the following notation for its derivatives evaluated at an eigenvalue \(x_a\in \sigma (W(x))\)
In this paper we assume that the partial multiplicities are ordered in an increasing way, i.e., \(\kappa _1^{(a)}\le \kappa _2^{(a)}\le \cdots \le \kappa _{s_a}^{(a)}\).
Proposition 11
If \(r_{i}^{(a)}\) and \(l_j^{(a)}\) are right and left root polynomials corresponding to the eigenvalue \(x_a\in \sigma (W(x))\), then a polynomial
exists such that
Definition 8
(Spectral jets) Given a matrix function f(x) smooth in a region \(\Omega \subset {\mathbb {C}}\) with \(x_a\in {\overline{\Omega }}\) a point in the closure of \(\Omega \), we consider its matrix spectral jets
and given a Jordan pair the root spectral jet vectors
Definition 9
We consider the following jet matrices
where \((\chi _{[N]}(x))^\top :=\begin{bmatrix} I_p,\ldots ,I_px^{N-1} \end{bmatrix}\in {\mathbb {C}}^{p\times Np}[x]\).
Lemma 1
(Root spectral jets and Jordan pairs) Given a canonical Jordan pair (X, J) for the monic matrix polynomial W(x) we have that
Thus, any polynomial \(P_n(x)=\sum _{j=0}^nP_j x^j\) has as its spectral jet vector corresponding to W(x) the following matrix
Definition 10
If \(W(x)=\sum \nolimits _{k=0}^{N}A_{k}x^k\in {\mathbb {C}}^{p\times p}[x]\) is a matrix polynomial of degree N, we introduce the matrix
Lemma 2
Given a Jordan triple (X, J, Y) for the monic matrix polynomial W(x) we have
Proof
From Lemma 1 we deduce that
which is nonsingular, see Propositions 8 and 9. The biorthogonality condition (2.6) of [39] for \({\mathcal {R}}\) and \({\mathcal {Q}}\) is
and if (X, J, Y) is a canonical Jordan triple, then
\(\square \)
Proposition 12
The matrix \({\mathcal {R}}_n:=\begin{bmatrix} Y, J Y,\ldots , J^{n-1} Y \end{bmatrix}\in {\mathbb {C}}^{Np\times np}\) has full rank.
Regarding the matrix \({\mathcal {B}}\),
Definition 11
Let us consider the bivariate matrix polynomial
where \(A_j\) are the matrix coefficients of W(x), see (1).
We consider the complete homogeneous symmetric polynomials in two variables,
$$\begin{aligned} h_n(x,y):=\sum _{k=0}^{n}x^ky^{n-k}. \end{aligned}$$
For example, the first four polynomials are
$$\begin{aligned} h_0(x,y)&=1,&h_1(x,y)&=x+y,&h_2(x,y)&=x^2+xy+y^2,&h_3(x,y)&=x^3+x^2y+xy^2+y^3. \end{aligned}$$
Proposition 13
In terms of complete homogeneous symmetric polynomials in two variables we can write
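A brief computational sketch of these polynomials (the name \(h_n\) is a choice made here; the paper's notation may differ). They satisfy \(h_n(x,y)=(x^{n+1}-y^{n+1})/(x-y)\), which is how they arise when divided differences such as \((W(x)-W(y))/(x-y)\) are expanded:

```python
# Complete homogeneous symmetric polynomials in two variables:
#   h_n(x, y) = sum_{k=0}^{n} x^k y^{n-k} = (x^{n+1} - y^{n+1}) / (x - y).
def h(n, x, y):
    return sum(x**k * y**(n - k) for k in range(n + 1))

# h_0 = 1, h_1 = x + y, h_2 = x^2 + x y + y^2, h_3 = x^3 + x^2 y + x y^2 + y^3
for x, y in [(2, 5), (3, -1), (7, 4)]:
    for n in range(4):
        assert h(n, x, y) * (x - y) == x**(n + 1) - y**(n + 1)
print(h(2, 2, 5))  # 2^2 + 2*5 + 5^2 = 39
```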
1.2 On orthogonal matrix polynomials
The polynomial ring \({\mathbb {C}}^{p\times p}[x]\) is a free bimodule over the ring of matrices \({\mathbb {C}}^{p\times p}\) with a basis given by \(\{I_p,I_p x, I_p x^2,\ldots \}\). Important free bisubmodules are the sets \({\mathbb {C}}_m^{p\times p}[x]\) of matrix polynomials of degree less than or equal to m. A basis, which has cardinality \(m+1\), for \({\mathbb {C}}_m^{p\times p}[x]\) is \(\{I_p,I_p x, \ldots , I_p x^m\}\); as \({\mathbb {C}}\) has the invariant basis number (IBN) property, so does \({\mathbb {C}}^{p\times p}\), see [64]. Therefore, since \({\mathbb {C}}^{p\times p}\) is an IBN ring, the rank of the free module \({\mathbb {C}}_m^{p\times p}[x]\) is unique and equal to \(m+1\), i.e. any other basis has the same cardinality. Its algebraic dual \(\big ({\mathbb {C}}_m^{p\times p}[x]\big )^*\) is the set of homomorphisms \(\phi :{\mathbb {C}}_m^{p\times p}[x]\rightarrow {\mathbb C}^{p\times p}\) which are, for the right module, of the form
where \(\phi _k\in {\mathbb {C}}^{p\times p}\). Thus, we can identify the dual of the right module with the corresponding left submodule. This dual is a free module with a unique rank, equal to \(m+1\), and a dual basis \(\{(I_p x^k)^*\}_{k=0}^m\) given by
We have similar statements for the left module \({\mathbb {C}}_m^{p\times p}[x]\), being its dual a right module
Definition 12
(Sesquilinear form) A sesquilinear form \(\left\langle {\cdot ,\cdot }\right\rangle \) on the bimodule \({\mathbb {C}}^{p\times p}[x]\) is a continuous map
such that for any triple \(P(x),Q(x),R(x)\in {\mathbb {C}}^{p\times p}[x]\) the following properties are fulfilled
-
(i)
\(\left\langle {AP(x)+BQ(x),R(y)}\right\rangle =A\left\langle {P(x),R(y)}\right\rangle +B\left\langle {Q(x),R(y)}\right\rangle \), \(\forall A,B\in {\mathbb {C}}^{p\times p}\),
-
(ii)
\(\left\langle {P(x),AQ(y)+BR(y)}\right\rangle =\left\langle {P(x),Q(y)}\right\rangle A^\top +\left\langle {P(x),R(y)}\right\rangle B^\top \), \(\forall A,B\in {\mathbb {C}}^{p\times p}\).
The reader has probably noticed that, despite dealing with complex polynomials in a real variable, we have followed [26] and chosen the transpose instead of the Hermitian conjugate. For any couple of matrix polynomials \(P(x)=\sum \nolimits _{k=0}^{\deg P}p_kx^k\) and \(Q(x)=\sum \nolimits _{l=0}^{\deg Q} q_lx^l\) the sesquilinear form is defined by
where the coefficients are the values of the sesquilinear form on the basis of the module
The corresponding semi-infinite matrix
is named the Gram matrix of the sesquilinear form.
1.2.1 Hankel sesquilinear forms
Now, we present a family of examples of sesquilinear forms in \({\mathbb {C}}^{p\times p}[x]\) that we call Hankel sesquilinear forms. A first example is given by matrices with complex (or real) Borel measures in \({\mathbb {R}}\) as entries
i.e., a \(p\times p\) matrix of Borel measures supported in \({\mathbb {R}}\). Given any pair of matrix polynomials \(P(x),Q(x)\in {\mathbb {C}}^{p\times p}[x]\) we introduce the following sesquilinear form
A more general sesquilinear form can be constructed in terms of generalized functions (or continuous linear functionals). In [53, 54] a linear functional setting for orthogonal polynomials is given. We consider the space of polynomials \({\mathbb {C}}[x]\), with an appropriate topology, as the space of fundamental functions, in the sense of [27, 28], and take the space of generalized functions as the corresponding continuous linear functionals. It is remarkable that the topological dual space coincides with the algebraic dual space. On the other hand, this space of generalized functions is the space of formal series with complex coefficients \(({\mathbb {C}}[x])'={\mathbb {C}}[\![x]\!]\).
In this article we use generalized functions with a well defined support and, consequently, the previously described setting requires a suitable modification. Following [27, 28, 67], let us recall that the space of distributions is a space of generalized functions when the space of fundamental functions is constituted by the complex valued smooth functions of compact support \(\mathcal D:=C_0^\infty ({\mathbb {R}})\), the so-called space of test functions. In this context, the set of zeros of a distribution \(u\in \mathcal D'\) is the region \(\Omega \subset {\mathbb {R}}\) if for any fundamental function f(x) with support in \(\Omega \) we have \(\langle u, f\rangle =0\). Its complement, a closed set, is called the support, \({\text {supp}} u\), of the distribution u. Distributions of compact support, \(u\in {\mathcal {E}}'\), are generalized functions for which the space of fundamental functions is the topological space of complex valued smooth functions \({\mathcal {E}}=C^\infty ({\mathbb {R}})\). As \({\mathbb {C}}[x]\subsetneq {\mathcal {E}}\), we also know that \({\mathcal {E}}'\subsetneq ({\mathbb C}[x])'\cap {\mathcal {D}}'\). The set of distributions of compact support is a first example of an appropriate framework for the simultaneous consideration of polynomials and supports. More general settings appear within the space of tempered distributions \({\mathcal {S}}'\), \({\mathcal {S}}'\subsetneq {\mathcal {D}}'\). The space of fundamental functions is given by the Schwartz space \({\mathcal {S}}\) of complex valued fast decreasing functions, see [27, 28, 67]. We consider the space of fundamental functions constituted by the smooth functions of slow growth \({\mathcal {O}}_M\subset {\mathcal {E}}\), whose elements are smooth functions with derivatives bounded by polynomials.
As \({\mathbb {C}} [x],{\mathcal {S}}\subsetneq {\mathcal {O}}_M\), for the corresponding set of generalized functions we find that \({\mathcal O}_M'\subset ({\mathbb {C}}[x])'\cap {\mathcal {S}}'\). Therefore, these distributions give a second appropriate framework. Finally, for a third suitable framework, including the two previous ones, we need to introduce bounded distributions. Let us consider as space of fundamental functions the linear space \({\mathcal {B}}\) of bounded smooth functions, i.e., functions with all their derivatives in \(L^\infty ({\mathbb {R}})\), the corresponding space of generalized functions \({\mathcal {B}}'\) being the bounded distributions. From \({\mathcal {D}}\subsetneq {\mathcal {B}}\) we conclude that bounded distributions are distributions, \({\mathcal B}'\subsetneq {\mathcal {D}}'\). Then, we consider the space of fast decreasing distributions \({\mathcal {O}}_c'\) given by those distributions \(u\in {\mathcal {D}}'\) such that, for each positive integer k, \(\big (\sqrt{1+x^2}\big )^ku\in {\mathcal {B}}'\) is a bounded distribution. Any polynomial \(P(x)\in {\mathbb {C}}[x]\), with \(\deg P=k\), can be written as \(P(x)=\big (\sqrt{1+x^2}\big )^k F(x)\) with \(F(x)=\frac{P(x)}{\big (\sqrt{1+x^2}\big )^k}\in {\mathcal {B}}\). Therefore, given a fast decreasing distribution \(u\in {\mathcal {O}}_c'\) we may consider
which makes sense as \(\big (\sqrt{1+x^2}\big )^ku\in {\mathcal {B}}'\) and \(F(x)\in {\mathcal {B}}\). Thus, \({\mathcal {O}}'_c\subset ({\mathbb C}[x])'\cap {\mathcal {D}}'\). Moreover, it can be proven that \({\mathcal O}_M'\subsetneq {\mathcal {O}}_c'\), see [53]. Summarizing this discussion, we have found three spaces of generalized functions suitable for the simultaneous discussion of polynomials and supports: \( {\mathcal {E}}'\subset {\mathcal {O}}_M'\subset {\mathcal {O}}_c' \subset \big (({\mathbb {C}}[x])'\cap {\mathcal {D}}'\big )\).
The linear functionals could have a discrete and, as the corresponding Gram matrix is required to be quasidefinite, infinite support. Then, we are faced with discrete orthogonal polynomials, see for example [57]. Two classical examples are those of Charlier and Meixner. For \(\mu >0\) we have the Charlier (or Poisson–Charlier) linear functional
$$\begin{aligned} u&=\sum _{k=0}^\infty \frac{\mu ^k}{k!}\delta (x-k), \end{aligned}$$
and, for \(\beta >0\) and \(0<c<1\), the Meixner linear functional is
$$\begin{aligned} u&=\sum _{k=0}^\infty \frac{(\beta )_k}{k!}c^k\delta (x-k), \end{aligned}$$
where \((\beta )_k:=\beta (\beta +1)\cdots (\beta +k-1)\) denotes the Pochhammer symbol.
See [4] for matrix extensions of these discrete linear functionals and corresponding matrix orthogonal polynomials.
Definition 13
(Hankel sesquilinear forms) Given a matrix with generalized functions as entries
i.e., \(u_{i,j}\in ({\mathbb {C}}[x])'\), then the associated sesquilinear form \(\left\langle {P(x),Q(x)}\right\rangle _u\) is given by
When \(u_{k,l}\in {\mathcal {O}}_c'\), we write \(u\in \big ({\mathcal O}_c'\big )^{p\times p}\) and say that we have a matrix of fast decreasing distributions. In this case the support is defined as \({\text {supp}} (u):=\cup _{k,l=1}^p{\text {supp}}(u_{k,l})\).
Observe that in this Hankel case, we could also have continuous and discrete orthogonality.
Proposition 14
In terms of the moments
the Gram matrix of the sesquilinear form given in Definition 13 is the following moment matrix
of Hankel type.
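A minimal sketch of this block Hankel structure (the matrix of measures below is chosen here purely for illustration):

```python
import numpy as np

# For the matrix of measures d mu(x) = diag(1, x) dx on [0, 1], the moments
# are m_n = diag(1/(n+1), 1/(n+2)), and the Gram matrix is block Hankel:
# its (i, j) block is m_{i+j}, depending only on i + j.
def m(n):
    return np.diag([1.0 / (n + 1), 1.0 / (n + 2)])

k = 3  # truncation: leading (k p) x (k p) block, with p = 2
G = np.block([[m(i + j) for j in range(k)] for i in range(k)])

for i in range(k):
    for j in range(k):
        assert np.allclose(G[2*i:2*i+2, 2*j:2*j+2], m(i + j))
print(G.shape)  # (6, 6)
```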
1.2.2 Matrices of generalized kernels and sesquilinear forms
The previous examples all have in common the same Hankel block symmetry for the corresponding matrices. However, there are sesquilinear forms which do not have this particular Hankel type symmetry. Let us stop for a moment at this point and elaborate on bilinear and sesquilinear forms for polynomials. We first recall some facts regarding the scalar case with \(p=1\), and bilinear forms instead of sesquilinear forms. Given \(u_{x,y}\in ({\mathbb C}[x,y])'=({\mathbb {C}}[x,y])^*\cong {\mathbb {C}}[\![x,y]\!]\), we can consider the continuous bilinear form \(B(P(x),Q(y))=\langle u_{x,y}, P(x)\otimes Q(y)\rangle \). This gives a continuous linear map \({\mathcal {L}}_u: {\mathbb {C}}[y]\rightarrow ({\mathbb {C}} [x])'\) such that \(B(P(x),Q(y))=\langle {\mathcal {L}}_u(Q(y)), P(x)\rangle \). The Gram matrix of this bilinear form has coefficients \(G_{k,l}=B(x^k,y^l)=\langle u_{x,y}, x^k\otimes y^{l}\rangle =\langle {\mathcal {L}}_u(y^l),x^k\rangle \). Here we follow Schwartz's discussion on kernels and distributions [66], see also [45]. A kernel u(x, y) is a complex valued locally integrable function that defines an integral operator \(f(x)\mapsto g(x)=\int u(x,y) f(y){\text {d}}y \). Following [67], we denote by \(({\mathcal {D}})_x\) and \(({\mathcal {D}}')_x\) the test functions and the corresponding distributions in the variable x, and similarly for the variable y. We extend this construction by considering a bivariate distribution in the variables x, y, \(u_{x,y}\in ({\mathcal D}')_{x,y}\), which Schwartz called a noyau-distribution and which, as we use a wider range of generalized functions, we will call a generalized kernel. This \(u_{x,y}\) generates a continuous bilinear form \( B_u\big (\phi (x),\psi (y)\big ) =\langle u_{x,y}, \phi (x)\otimes \psi (y)\rangle \).
It also generates a continuous linear map \(\mathcal L_u: ({\mathcal {D}})_y\rightarrow ({\mathcal {D}}')_x\) with \(\langle ({\mathcal {L}}_u (\psi (y)))_x,\phi (x)\rangle =\langle u_{x,y}, \phi (x)\otimes \psi (y)\rangle \). The Schwartz kernel theorem states that every generalized kernel \(u_{x,y}\) defines a continuous linear transformation \({\mathcal {L}}_u\) from \(({\mathcal {D}})_y\) to \(({\mathcal D}')_x\), and to each of such continuous linear transformations we can associate one and only one generalized kernel. According to the prolongation scheme developed in [66], the generalized kernel \(u_{x,y}\) is such that \({\mathcal {L}}_u:({\mathcal E})_y\rightarrow ({\mathcal {E}}')_x\) if and only if the support of \(u_{x,y}\) in \({\mathbb {R}}^2\) is compact.
We can extend these ideas to the matrix scenario of this paper, where instead of bilinear forms we have sesquilinear forms.
Definition 14
A matrix of generalized kernels
with \((u_{x,y})_{k,l}\in ({\mathbb {C}}[x,y])'\) or, if a notion of support is required, \((u_{x,y})_{k,l}\in (\mathcal E')_{x,y},({\mathcal {O}}_M')_{x,y},({\mathcal {O}}_c')_{x,y}\), provides a continuous sesquilinear form with entries given by
where \({\mathcal {L}}_{u_{k,l}}:{\mathbb {C}}[y]\rightarrow ({\mathbb {C}}[x])'\)—or, depending on the setting, \({\mathcal {L}}_{u_{k,l}}:({\mathcal E})_y\rightarrow ({\mathcal {E}}')_x\), \({\mathcal {L}}_{u_{k,l}}:({\mathcal O}_M)_y\rightarrow ({\mathcal {O}}'_c)_x\), for example—is a continuous linear operator. We can condense this into matrix form: for \(u_{x,y}\in ({\mathbb {C}}^{p\times p}[x,y])'=({\mathbb {C}}^{p\times p}[x,y])^*\cong {\mathbb {C}}^{p \times p}[\![x,y]\!]\), a sesquilinear form is given by
with \({\mathcal {L}}_u: {\mathbb {C}}^{p\times p}[y]\rightarrow ({\mathbb {C}}^{p\times p}[x])'\) a continuous linear map. Or, in other scenarios \(\mathcal L_u:( ({\mathcal {E}})_y)^{p\times p}\rightarrow (({\mathcal {E}}')_x)^{p\times p}\) or \({\mathcal {L}}_u: (( {\mathcal {O}}_M)_y)^{p\times p}\rightarrow ((\mathcal O_c')_x)^{p\times p}\).
If, instead of a matrix of bivariate distributions, we have a matrix of bivariate measures then we could write for the sesquilinear form \(\langle P(x),Q(y)\rangle =\iint P(x) {\text {d}}\mu (x,y) (Q(y))^\top \), where \(\mu (x,y)\) is a matrix of bivariate measures.
For the scalar case \(p=1\), Adler and van Moerbeke discussed in [1] different possibilities of non-Hankel Gram matrices. Their Gram matrix has coefficients \(G_{k,l}=\langle u_l,x^k\rangle \), for an infinite sequence of generalized functions \(u_l\), which recovers the Hankel scenario for \(u_l=x^lu\). They studied in more detail the following cases:
-
(i)
Banded case: \(u_{l+km}=x^{km} u_l\).
-
(ii)
Concatenated solitons: \(u_l(x)=\delta (x-p_{l+1})-(\lambda _{l+1})^2\delta (x-q_{l+1})\).
-
(iii)
Nested Calogero–Moser systems: \(u_l(x)= \delta '(x-p_{l+1})+\lambda _{l+1}\delta (x-p_{l+1})\).
-
(iv)
Discrete KdV soliton type: \(u_l(x)=(-1)^l \delta ^{(l)}(x-p)-\delta ^{(l)}(x+p)\).
We see that the last three weights are generalized functions. To compare with Schwartz's approach, we observe that \(\langle u_{x,y}, x^k\otimes y^l \rangle =\langle u_l, x^k \rangle \) and, consequently, we deduce \(u_l={\mathcal {L}}_u(y^l)\) (and, for continuous kernels, \(u_l(x)=\int u(x,y)y^l{\text {d}}y\)). The first case has a banded structure and its Gram matrix fulfills \(\Lambda ^m G=G(\Lambda ^\top )^m\). In [2], different examples are discussed for matrix orthogonal polynomials; for instance, bigraded Hankel matrices \(\Lambda ^nG=G\big (\Lambda ^\top \big )^m\), where n, m are positive integers, can be realized as \(G_{k,l}=\langle u_l, I_px^k\rangle \) in terms of matrices of linear functionals \(u_l\) which satisfy the periodicity condition \(u_{l+m}=u_l x^{n}\). Therefore, given the linear functionals \(u_0,\dots ,u_{m-1}\), we can recover all the others.
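The banded symmetry \(\Lambda ^m G=G(\Lambda ^\top )^m\) can be sketched in the simplest scalar Hankel situation \(m=1\), \(u_l=x^lu\) (the moments below are an arbitrary choice, here those of Lebesgue measure on [0, 1]):

```python
import numpy as np

# For a Hankel Gram matrix G_{k,l} = m_{k+l}, shifting block rows (Lambda G)
# equals shifting block columns (G Lambda^T); on a finite truncation the
# identity holds away from the truncation boundary.
n = 6
moments = np.array([1.0 / (k + 1) for k in range(2 * n)])
G = np.array([[moments[k + l] for l in range(n)] for k in range(n)])

Lam = np.eye(n, k=1)  # truncated shift matrix
lhs = Lam @ G         # (Lam G)_{k,l}  = m_{k+1+l}
rhs = G @ Lam.T       # (G Lam^T)_{k,l} = m_{k+l+1}
assert np.allclose(lhs[:-1, :-1], rhs[:-1, :-1])
print("banded (Hankel) symmetry verified on the truncation interior")
```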
1.2.3 Sesquilinear forms supported by the diagonal and Sobolev sesquilinear forms
First we consider the scalar case
Definition 15
A generalized kernel \(u_{x,y}\) is supported by the diagonal \(y=x\) if
for a locally finite sum and generalized functions \(u^{(n,m)}_x\in ({\mathcal {D}}')_x\).
Proposition 15
(Sobolev bilinear forms) The bilinear form corresponding to a generalized kernel supported by the diagonal is \(B(\phi (x),\psi (x))=\sum _{n,m}\left\langle {u^{(n,m)}_x,\phi ^{(n)}(x)\psi ^{(m)}(x)}\right\rangle \), which is of Sobolev type,
For order-zero generalized functions \(u^{(n,m)}_x\), i.e., for a set of Borel measures \(\mu ^{(n,m)}\), we have
which is of Sobolev type. Thus, in the scalar case, generalized kernels supported by the diagonal are just Sobolev bilinear forms. The extension of these ideas to the matrix case is immediate: we only need to require all the generalized kernels to be supported by the diagonal.
Proposition 16
(Sobolev sesquilinear forms) A matrix of generalized kernels supported by the diagonal provides Sobolev sesquilinear forms
for a locally finite sum in the orders of derivation n, m, and generalized functions \(u^{(n,m)}_x\in ({\mathbb {C}}[x] )'\). All Sobolev sesquilinear forms are obtained in this way.
For a recent review on scalar Sobolev orthogonal polynomials see [51]. Observe that within this general framework we could consider matrix discrete Sobolev orthogonal polynomials, which will appear whenever the linear functionals \(u^{(m,n)}\) have infinite discrete support, as long as u is quasidefinite.
1.2.4 Biorthogonality, quasidefiniteness and Gauss–Borel factorization
Definition 16
(Biorthogonal matrix polynomials) Given a sesquilinear form \(\left\langle {\cdot ,\cdot }\right\rangle \), two sequences of matrix polynomials \(\big \{P_n^{[1]}(x)\big \}_{n=0}^\infty \) and \(\big \{P_n^{[2]}(x)\big \}_{n=0}^\infty \) are said to be biorthogonal with respect to \(\left\langle {\cdot ,\cdot }\right\rangle \) if
(i) \(\deg (P_n^{[1]}(x))=\deg (P_n^{[2]}(x))=n\) for all \(n\in \{0,1,\dots \}\),
(ii) \(\left\langle {P_n^{[1]}(x),P_m^{[2]}(y)}\right\rangle =\delta _{n,m}H_n\) for all \(n,m\in \{0,1,\dots \}\),
where \(H_n\) are nonsingular matrices and \(\delta _{n,m}\) is the Kronecker delta.
Definition 17
(Quasidefiniteness) A Gram matrix of a sesquilinear form \(\langle \cdot ,\cdot \rangle _u\) is said to be quasidefinite whenever \(\det G_{[k]}\ne 0\), \(k\in \{0,1,\dots \}\). Here \(G_{[k]}\) denotes the truncation
We say that the bivariate generalized function \(u_{x,y}\) is quasidefinite and the corresponding sesquilinear form is nondegenerate whenever its Gram matrix is quasidefinite.
Proposition 17
(Gauss–Borel factorization, see [7]) If the Gram matrix of a sesquilinear form \(\langle \cdot ,\cdot \rangle _u\) is quasidefinite, then there exists a unique Gauss–Borel factorization given by
where \(S_1,S_2\) are lower unitriangular block matrices and H is a diagonal block matrix
with \((S_i)_{n,m}\) and \(H_n\in {\mathbb {C}}^{p\times p}\), \(\forall n,m\in \{0,1,\dots \}\).
For \(l\ge k\) we will also use the following bordered truncated Gram matrix
where we have replaced the last row of blocks of the truncated Gram matrix \(G_{[k]}\) by the row of blocks . We also need a similar matrix but replacing the last block column of \(G_{[k]}\) by a column of blocks as indicated
Using the last quasideterminants, see [29, 58] and “Appendix”, we find
Proposition 18
If the last quasideterminants of the truncated moment matrices are nonsingular, i.e.,
then, the Gauss–Borel factorization can be performed and the following expressions are fulfilled
and for the inverse elements [58] the formulas
hold true.
We see that the matrices \(H_k\) are quasideterminants, and following [7, 8] we refer to them as quasitau matrices.
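To make Proposition 17 and the quasideterminantal expressions of Proposition 18 concrete, here is a minimal numerical sketch, not part of the original text, for the scalar case \(p=1\), assuming NumPy; the helper name `gauss_borel` is ours. It performs the Gauss–Borel (LDU) factorization of a quasidefinite Gram matrix by Schur complements, checks that the scalar quasideterminants \(H_k\) reduce to ratios of consecutive leading minors, and checks biorthogonality at the level of coefficients, \(S_1GS_2^\top =H\):

```python
import numpy as np

def gauss_borel(G):
    """Scalar (p = 1) Gauss-Borel factorization G = S1^{-1} H S2^{-T}:
    returns L = S1^{-1} (lower unitriangular), H (diagonal entries) and
    U = S2^{-T} (upper unitriangular), computed by Schur complements.
    It exists and is unique when every truncation G_[k] is nonsingular."""
    n = G.shape[0]
    A = G.astype(float).copy()
    L, U, H = np.eye(n), np.eye(n), np.zeros(n)
    for k in range(n):
        H[k] = A[k, k]
        L[k + 1:, k] = A[k + 1:, k] / H[k]
        U[k, k + 1:] = A[k, k + 1:] / H[k]
        A[k + 1:, k + 1:] -= np.outer(L[k + 1:, k], A[k, k + 1:])
    return L, H, U

# Quasidefinite Hankel Gram matrix: the Hilbert matrix (Lebesgue measure on [0, 1])
n = 6
G = np.array([[1.0 / (k + l + 1) for l in range(n)] for k in range(n)])
L, H, U = gauss_borel(G)
assert np.allclose(L @ np.diag(H) @ U, G)

# Scalar quasideterminants reduce to ratios of consecutive leading minors
for k in range(n):
    num = np.linalg.det(G[: k + 1, : k + 1])
    den = np.linalg.det(G[:k, :k]) if k > 0 else 1.0
    assert np.isclose(H[k], num / den)

# Biorthogonality at the level of coefficients: S1 G S2^T = H
S1, S2T = np.linalg.inv(L), np.linalg.inv(U)
assert np.allclose(S1 @ G @ S2T, np.diag(H))
```

In the matrix case \(p>1\) the same elimination goes through with \(p\times p\) blocks, the pivots \(H_k\) being nonsingular matrices rather than nonzero scalars.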
1.2.5 Biorthogonal polynomials, second kind functions and Christoffel–Darboux kernels
Definition 18
We define \(\chi (x):=[I_p,I_p x,I_px^2,\dots ]^\top \), and for \(x\ne 0\), \(\chi ^*(x):=[I_px^{-1},I_p x^{-2},I_px^{-3},\dots ]^\top \).
Remark 1
Observe that the Gram matrix can be expressed as
and its block entries are
If the sesquilinear form derives from a matrix of bivariate measures \(\mu (x,y)=[\mu _{i,j}(x,y)]\) we have for the Gram matrix blocks
which reduces, for measures absolutely continuous with respect to the Lebesgue measure \({\text {d}}x{\text {d}}y\), to a matrix of weights \(w(x,y)=[w_{i,j}(x,y)]\). When the matrix of generalized kernels is Hankel we recover the classical Hankel structure, and the Gram matrix is a moment matrix. For example, for a matrix of measures we will have \(G_{k,l}=\int x^{k+l}{\text {d}}\mu (x )\).
Definition 19
Given a quasidefinite matrix of generalized kernels \(u_{x,y}\) and the Gauss–Borel factorization (17) of its Gram matrix, the corresponding first and second families of matrix polynomials are
respectively.
Proposition 19
(Biorthogonality) Given a quasidefinite matrix of generalized kernels \(u_{x,y}\), the first and second families of monic matrix polynomials \(\big \{P_n^{[1]}(x)\big \}_{n=0}^\infty \) and \(\big \{P_n^{[2]}(x)\big \}_{n=0}^\infty \) are biorthogonal
Remark 2
The biorthogonal relations yield the orthogonality relations
Remark 3
(Symmetric generalized kernels) If \(u_{x,y}=(u_{y,x})^\top \), the Gram matrix is symmetric \(G=G^\top \) and we are dealing with a Cholesky block factorization with \(S_1=S_2\) and \(H=H^\top \). Now \(P^{[1]}_n(x)=P^{[2]}_n(x)=:P_n(x)\), and \(\{P_n(x)\}_{n=0}^\infty \) is a set of monic orthogonal matrix polynomials. In this case \(C_n^{[1]}(x)=C_n^{[2]}(x)=:C_n(x)\).
The shift matrix is the following semi-infinite block matrix
which satisfies the spectral property
Proposition 20
The symmetry of the block Hankel moment matrix reads \(\Lambda G=G\Lambda ^\top \).
Notice that this symmetry completely characterizes Hankel block matrices.
Definition 20
The matrices \( J_1:=S_1 \Lambda (S_1)^{-1}\) and \(J_2:=S_2 \Lambda (S_2)^{-1}\) are the Jacobi matrices associated with the Gram matrix G.
The reader should notice an abuse of notation: for the sake of simplicity we have used the same letter for Jacobi and Jordan matrices. The type of matrix will be clear from the context.
Proposition 21
The biorthogonal polynomials are eigenvectors of the Jacobi matrices
and the second kind functions à la Gram satisfy
Proposition 22
For Hankel-type Gram matrices (i.e., associated with a matrix of univariate generalized functionals) the two Jacobi matrices are related by \( H^{-1}J_1=J_2^{\top } H^{-1}\), and are therefore tridiagonal matrices. This yields the three-term relations for the biorthogonal polynomials and the second kind functions, respectively.
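The tridiagonality and the resulting three-term relation can be illustrated numerically. The sketch below, an aside assuming NumPy and the scalar symmetric Hankel case (Lebesgue measure on [0, 1], so the Gram matrix is the Hilbert matrix), builds \(J=S_1\Lambda S_1^{-1}\) and checks both properties on the rows unaffected by truncation:

```python
import numpy as np

# Monic orthogonal polynomials for the Lebesgue measure on [0, 1]:
# the Gram matrix is the Hilbert matrix, of Hankel type
n = 6
G = np.array([[1.0 / (k + l + 1) for l in range(n)] for k in range(n)])

# Symmetric (Cholesky) Gauss-Borel factorization G = L diag(H) L^T, L = S1^{-1}
C = np.linalg.cholesky(G)
L = C / np.diag(C)
H = np.diag(C) ** 2
S1 = np.linalg.inv(L)                  # rows: coefficients of the monic P_n

Lam = np.diag(np.ones(n - 1), 1)       # truncated shift matrix
J = S1 @ Lam @ L                       # Jacobi matrix J = S1 Lambda S1^{-1}

# Hankel symmetry forces J to be tridiagonal (rows unaffected by truncation)
for i in range(n - 1):
    for j in range(n):
        if i > j + 1:
            assert abs(J[i, j]) < 1e-6

# Three-term relation x P_k(x) = P_{k+1}(x) + J[k,k] P_k(x) + J[k,k-1] P_{k-1}(x)
P = lambda k, x: np.polyval(S1[k, : k + 1][::-1], x)
x = 0.3
for k in range(1, n - 2):
    assert np.isclose(x * P(k, x),
                      P(k + 1, x) + J[k, k] * P(k, x) + J[k, k - 1] * P(k - 1, x))
```

For a non-Hankel Gram matrix the same construction yields a lower Hessenberg J, and the three-term relation is lost.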
Proposition 23
We have the following last quasideterminantal expressions
Definition 21
(Christoffel–Darboux kernel, [15, 68]) Given two sequences of matrix biorthogonal polynomials \(\big \{P_k^{[1]}(x)\big \}_{k=0}^\infty \) and \(\big \{P_k^{[2]}(y)\big \}_{k=0}^\infty \), with respect to the sesquilinear form \(\left\langle {\cdot ,\cdot }\right\rangle _u\), we define the n-th Christoffel–Darboux kernel matrix polynomial
and the mixed Christoffel–Darboux kernel
Proposition 24
(i) For a quasidefinite matrix of generalized kernels \(u_{x,y}\), the corresponding Christoffel–Darboux kernel gives the projection operator
$$\begin{aligned}&\left\langle { K_n(x,z),\sum _{0\le j\ll \infty } C_j P^{[2]}_j(y)}\right\rangle _u= \left( \sum _{j=0}^nC_jP_j^{[2]}(z)\right) ^\top ,\nonumber \\&\left\langle { \sum _{0\le j\ll \infty }C_jP^{[1]}_j(x),(K_n(z,y))^\top }\right\rangle _u= \sum _{j=0}^nC_jP^{[1]}_j(z). \end{aligned}$$(13)
(ii) In particular, we have
$$\begin{aligned} \left\langle { K_n(x,z),I_py^l}\right\rangle _u&=I_pz^l,&l\in&\{0,1,\dots ,n\}. \end{aligned}$$(14)
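The reproducing property (14) can be verified directly for a discrete measure. The following aside, assuming NumPy and the scalar case with a positive measure supported by finitely many points on the diagonal, builds the kernel \(K_n(x,z)=\sum _{j\le n}P_j(z)H_j^{-1}P_j(x)\) and checks that pairing it with \(y^l\) reproduces \(z^l\) for \(l\le n\):

```python
import numpy as np

# Discrete measure u = sum_i w_i delta(x - x_i), supported by the diagonal y = x
nodes = np.array([-2.0, -0.7, 0.1, 0.9, 1.6, 2.3])
w = np.array([0.5, 1.0, 2.0, 1.0, 0.7, 0.3])
N = len(nodes)

# Gram matrix and monic orthogonal polynomials via the Cholesky factorization
G = np.array([[np.sum(w * nodes ** (k + l)) for l in range(N)] for k in range(N)])
C = np.linalg.cholesky(G)
S1 = np.linalg.inv(C / np.diag(C))
H = np.diag(C) ** 2

P = lambda j, x: np.polyval(S1[j, : j + 1][::-1], x)

def K(n, x, z):
    # n-th Christoffel-Darboux kernel, scalar case: sum_{j<=n} P_j(z) H_j^{-1} P_j(x)
    return sum(P(j, z) * P(j, x) / H[j] for j in range(n + 1))

# Reproducing property (14): <K_n(x, z), y^l>_u = z^l for l = 0, ..., n
n, z = 3, 0.4
for l in range(n + 1):
    assert np.isclose(np.sum(w * K(n, nodes, z) * nodes ** l), z ** l)
```

The pairing is the discrete one, \(\langle f(x),g(y)\rangle _u=\sum _i w_i f(x_i)g(x_i)\); for \(l>n\) the identity fails, since \(K_n\) only projects onto polynomials of degree at most n.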
Proposition 25
(Christoffel–Darboux formula) When the sesquilinear form is Hankel (now u is a matrix of univariate generalized functions with its Gram matrix of block Hankel type) the Christoffel–Darboux kernel satisfies
and the mixed Christoffel–Darboux kernel fulfills
Proof
We only prove the second formula; the first one follows similarly. It is a consequence of the three-term relation. Firstly, let us notice that
Secondly, we have
Using this, we calculate \( \big (P_{[n]}^{[2]}(y) \big )^{\top } \left[ J_{2}^{\top }H^{-1}\right] _{[n]} C^{[1]}_{[n]}(x)\), first by computing the action of the middle matrix on its left and then on its right, to get
and since \(P_0=I_p\) the Proposition is proven. \(\square \)
Next, we deal with the fact that our definition of the second kind functions implies nonadmissible products and does involve series.
Definition 22
For the support of the matrix of generalized kernels \({\text {supp}}( u_{x,y})\subset {\mathbb {C}}^2\) we consider the action of the component projections \(\pi _1,\pi _2:\mathbb C^2\rightarrow {\mathbb {C}}\) on its first and second variables, \((x,y)\overset{\pi _1}{\mapsto }x\), \((x,y)\overset{\pi _2}{\mapsto }y\), respectively, and introduce the projected supports \({\text {supp}}_x(u):=\pi _1\big ({\text {supp}} (u_{x,y})\big ) \) and \({\text {supp}}_y(u):=\pi _2\big ({\text {supp}} (u_{x,y})\big )\), both subsets of \( {\mathbb {C}}\). We will assume that \(r_x:=\sup \{|z|: z\in {\text {supp}}_x(u)\}<\infty \) and \(r_y:=\sup \{|z|: z\in {\text {supp}}_y(u)\}<\infty \). We also consider the disks about infinity, or annuli around the origin, \(D_x:=\{z\in {\mathbb {C}}: |z|> r_x\}\) and \(D_y:=\{z\in {\mathbb {C}}: |z|> r_y\}\).
Definition 23
(Second kind functions á la Cauchy) For a generalized kernels is such that \(u_{x,y}\in \big ((\mathcal O_c')_{x,y}\big )^{p\times p}\) we define two families of second kind functions á la Cauchy given by
2 Matrix Geronimus transformations
Geronimus transformations for scalar orthogonal polynomials were first discussed in [35], where some determinantal formulas were found, see [55, 73]. Geronimus perturbations of degree two of scalar bilinear forms have been very recently treated in [16] and in the general case in [17]. Here we discuss its matrix extension for general sesquilinear forms.
Definition 24
Given a matrix of generalized kernels \(u_{x,y}=((u_{x,y})_{i,j})\in \big ((\mathcal O_c')_{x,y}\big )^{p\times p}\) with a given support \({\text {supp}} u_{x,y}\), and a matrix polynomial \(W(y)\in {\mathbb {C}}^{p\times p}[y]\) of degree N, such that \( \sigma (W(y))\cap {\text {supp}}_y(u)=\varnothing \), a matrix of bivariate generalized functions \({\check{u}}_{x,y}\) is said to be a matrix Geronimus transformation of the matrix of generalized kernels \(u_{x,y}\) if
Proposition 26
In terms of sesquilinear forms a Geronimus transformation fulfills
while, in terms of the corresponding Gram matrices, it satisfies
We will assume that the perturbed moment matrix has a Gauss–Borel factorization \({\check{G}}={\check{S}}_1^{-1} {\check{H}} (\check{S}_2)^{-\top }\), where \({\check{S}}_1,{\check{S}}_2\) are lower unitriangular block matrices and \({\check{H}}\) is a diagonal block matrix
Hence, the Geronimus transformation provides the family of matrix biorthogonal polynomials
with respect to the perturbed sesquilinear form \(\left\langle {\cdot ,\cdot }\right\rangle _{{\check{u}}}\).
Observe that any matrix of generalized kernels \(v_{x,y}\) such that \(v_{x,y}W(y)=0_p\) can be added to a Geronimus transformed matrix of generalized kernels, \({\check{u}}_{x,y}\mapsto {\check{u}}_{x,y}+v_{x,y}\), to get a new Geronimus transformed matrix of generalized kernels. We call these terms masses.
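Before developing the general theory, a small numerical aside, not part of the original text, may help. Assuming NumPy and the scalar case with \(W(y)=y-c\) of degree one, the sketch below builds a Geronimus transformed measure with a mass \(\xi \) at c and checks the Gram-matrix form of Proposition 26, \({\check{G}}\,W(\Lambda ^\top )=G\), on a truncation:

```python
import numpy as np

# Scalar Geronimus transformation with W(y) = y - c and a mass xi at c:
#   u_check = u (W)^{-1} + xi delta_c,  so that  u_check W(y) = u
nodes = np.array([-2.0, -1.2, -0.4, 0.5, 1.3, 2.1])   # supp(u)
w = np.array([0.4, 1.1, 0.9, 0.6, 1.3, 0.8])
c, xi, n = -3.0, 1.5, 6        # sigma(W) = {c}, disjoint from supp(u)

mom = lambda k: np.sum(w * nodes ** k)                                 # moments of u
mom_ck = lambda k: np.sum(w * nodes ** k / (nodes - c)) + xi * c ** k  # of u_check

G = np.array([[mom(k + l) for l in range(n)] for k in range(n)])
Gc = np.array([[mom_ck(k + l) for l in range(n)] for k in range(n)])

# Truncated W(Lambda^T) = Lambda^T - c I
Lam = np.diag(np.ones(n - 1), 1)
WLamT = Lam.T - c * np.eye(n)

# Gram-matrix form of the transformation: G_check W(Lambda^T) = G
# (checked away from the last column, which is lost to truncation);
# the mass xi is annihilated since (y - c) delta_c = 0
assert np.allclose((Gc @ WLamT)[:, : n - 1], G[:, : n - 1], atol=1e-6)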
2.1 The resolvent and connection formulas
Definition 25
The resolvent matrix is
The key role of this resolvent matrix is determined by the following properties
Proposition 27
(i) The resolvent matrix can also be expressed as
$$\begin{aligned} \omega = {\check{H}} \big ({\check{S}}_2\big )^{-\top } W(\Lambda ^\top )\big ( S_2\big )^{\top } H^{-1}, \end{aligned}$$(17)
where the products in the RHS are associative.
(ii) The resolvent matrix is a lower unitriangular block banded matrix, with only the first N block subdiagonals possibly nonzero; i.e.,
$$\begin{aligned} \omega =\begin{bmatrix} I_p&0_p&\dots&0_p&0_p&\dots \\ \omega _{1,0}&I_p&\ddots&0_p&0_p&\ddots \\ \vdots&\ddots&\ddots&\ddots&\ddots \\ \omega _{N,0}&\omega _{N,1}&\dots&I_p&0_p&\ddots \\ 0_p&\omega _{N+1,1}&\cdots&\omega _{N+1,N}&I_p&\ddots \\ \vdots&\ddots&\ddots&\ddots&\ddots \end{bmatrix}. \end{aligned}$$
(iii) The following connection formulas are satisfied
$$\begin{aligned} {\check{P}}^{[1]}(x)&=\omega P^{[1]}(x), \end{aligned}$$(18)
$$\begin{aligned} \big ({\check{H}}^{-1}\omega H\big )^\top {\check{P}}^{[2]} (y)&=P^{[2]}(y)W^\top (y). \end{aligned}$$(19)
(iv) For the last subdiagonal of the resolvent we have
$$\begin{aligned} \omega _{N+k,k}={\check{H}}_{N+k}A_N(H_k)^{-1}. \end{aligned}$$(20)
Proof
(i) From Proposition 26 and the Gauss–Borel factorizations of G and \({\check{G}}\) we get
$$\begin{aligned} \big ( S_1\big )^{-1} H \big ( S_2\big )^{-\top }=\Big (\big (\check{S}_1\big )^{-1} {\check{H}} \big ({\check{S}}_2\big )^{-\top }\Big ) W(\Lambda ^\top ), \end{aligned}$$
so that
$$\begin{aligned} {\check{S}}_1\big ( S_1\big )^{-1} H = {\check{H}} \big (\check{S}_2\big )^{-\top } W(\Lambda ^\top )\big ( S_2\big )^{\top }. \end{aligned}$$
(ii) The resolvent matrix, being a product of lower unitriangular matrices, is lower unitriangular. Moreover, from (17) we deduce that all its subdiagonals have zero coefficients except for the first N. Thus, it must have the described band structure.
(iii) From the definition we have (18). Let us notice that (17) can be written as
$$\begin{aligned} \omega ^\top {\check{H}} ^{-\top }= H^{-\top }S_2W^\top (\Lambda )\big ({\check{S}}_2\big )^{-1}, \end{aligned}$$
so that
$$\begin{aligned} \omega ^\top {\check{H}} ^{-\top }{\check{P}}^{[2]}(y)= H^{-\top }S_2W^\top (\Lambda )\chi (y), \end{aligned}$$
and (19) follows.
(iv) It is a consequence of (17).
\(\square \)
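Proposition 27 can be probed numerically in the simplest setting. The sketch below, an aside assuming NumPy and the scalar symmetric case with \(W(y)=y-c\) (so \(N=1\) and \(A_N=1\)), checks the band structure (ii), the expression (17) and the last-subdiagonal formula (20); the helper `ldu_sym` is ours:

```python
import numpy as np

def ldu_sym(G):
    # Symmetric scalar Gauss-Borel factorization G = L diag(H) L^T via Cholesky
    C = np.linalg.cholesky(G)
    return C / np.diag(C), np.diag(C) ** 2

nodes = np.array([-2.0, -1.2, -0.4, 0.5, 1.3, 2.1])
w = np.array([0.4, 1.1, 0.9, 0.6, 1.3, 0.8])
c, xi, n = -3.0, 1.5, 6        # W(y) = y - c, N = 1, mass xi at c

G = np.array([[np.sum(w * nodes ** (k + l)) for l in range(n)] for k in range(n)])
Gc = np.array([[np.sum(w * nodes ** (k + l) / (nodes - c)) + xi * c ** (k + l)
                for l in range(n)] for k in range(n)])

L, H = ldu_sym(G)              # L = S1^{-1} = S2^{-1} (symmetric case)
Lc, Hc = ldu_sym(Gc)

# Resolvent omega = S1_check S1^{-1}: unitriangular, banded with N = 1 subdiagonal
omega = np.linalg.inv(Lc) @ L
assert np.allclose(np.diag(omega), 1.0)
for i in range(n):
    for j in range(n):
        if i < j or i > j + 1:
            assert abs(omega[i, j]) < 1e-6

# Expression (17): omega = H_check S2_check^{-T} W(Lambda^T) S2^T H^{-1}
Lam = np.diag(np.ones(n - 1), 1)
omega_alt = np.diag(Hc) @ Lc.T @ (Lam.T - c * np.eye(n)) \
            @ np.linalg.inv(L.T) @ np.diag(1.0 / H)
assert np.allclose(omega[:, : n - 1], omega_alt[:, : n - 1], atol=1e-6)

# Last subdiagonal (20): omega_{k+1,k} = H_check_{k+1} A_N H_k^{-1}, with A_1 = 1
for k in range(n - 1):
    assert np.isclose(omega[k + 1, k], Hc[k + 1] / H[k], atol=1e-6)
```

The last column of the truncated product in (17) is discarded because the truncation of \(W(\Lambda ^\top )\) loses it; everything else matches the semi-infinite statement.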
The connection formulas (18) and (19) can be written as
Lemma 3
We have that
with \({\mathcal {B}}\) given in Definition 10.
Proposition 28
The Geronimus transformation of the second kind functions satisfies
Proof
To get (24) we argue as follows
But, we have
so that
and using the Gauss–Borel factorization the result follows. For (25) we have
\(\square \)
Observe that the corresponding entries are
2.2 Geronimus transformations and Christoffel–Darboux kernels
Definition 26
The resolvent wing is the matrix
Theorem 1
For \(m=\min (n,N)\), the perturbed and original Christoffel–Darboux kernels are related by the following connection formula
For \(n\ge N\), the connection formula for the mixed Christoffel–Darboux kernels is
where \({\mathcal {V}}(x,y)\) was introduced in Definition 11.
Proof
For the first connection formula (27) we consider the pairing
and compute it in two different ways. From (21) we get
and, therefore, \({\mathcal {K}}_{n-1}(x,y)={\check{K}}_{n-1}(x,y)\). Relation (22) leads to
and (27) is proven.
To derive (28) we consider the pairing
which, as before, can be computed in two different forms. On the one hand, using (24) we get
where \(\big ({\check{H}}\big ({\check{S}}_2\big )^{-\top }\big )_{[n,N]} \) is the truncation to the first n block rows and the first N block columns of \({\check{H}}\big ({\check{S}}_2\big )^{-\top }\). This simplifies for \(n\ge N\) to
On the other hand, from (22) we conclude
and, consequently, we obtain
\(\square \)
2.3 Spectral jets and relations for the perturbed polynomials and its second kind functions
For the time being we will assume that the perturbing polynomial is monic, \(W(x)=I_p x^N+\sum \nolimits _{k=0}^{N-1}A_{k}x^k\in \mathbb C^{p\times p}[x]\).
Definition 27
Given a perturbing monic matrix polynomial W(y) the most general mass term will have the form
expressed in terms of derivatives of Dirac linear functionals and adapted left root polynomials \(l_{j}^{(a)}(x)\) of W(x), and vectors of generalized functions \(\big (\xi ^{[a]}_{j,m}\big )_x\in \big (( {\mathbb {C}}[x])'\big )^p\). Discrete Hankel masses appear when these terms are supported by the diagonal with
with \(\xi ^{[a]}_{j,m}\in {\mathbb {C}}^p\).
Remark 4
Observe that the Hankel masses (30) are particular cases of (29) with
so that, with the particular choice in (29)
we get the diagonal case.
Remark 5
For the sesquilinear forms we have
Observe that the distribution \(v_{x,y}\) is associated with the eigenvalues and left root vectors of the perturbing polynomial W(x). Needless to say, when W(x) has a singular leading coefficient this spectral part could even disappear, for example if W(x) is unimodular, i.e., with constant determinant not depending on x. Notice that, in general, we have \(Np\ge \sum _{a=1}^q\sum _{j=1}^{s_a}\kappa ^{(a)}_j\) and we cannot ensure equality, except in the nonsingular leading coefficient case.
Definition 28
Given a set of generalized functions \((\xi ^{[a]}_{i,m})_x\), we introduce the matrices
Definition 29
The exchange matrix is
Definition 30
The left Jordan chain matrix is given by
For \(z\ne x_a\), we also introduce the \(p\times p\) matrices
where \(i=1,\dots ,s_a\).
Remark 6
Assume that the mass matrix is as in (30). Then, in terms of
we can write
Consequently,
Observe that \({\mathcal {X}}^{(a)}_i{\mathcal {L}}_i^{(a)}\in \mathbb C^{p\kappa ^{(a)}_i\times p\kappa ^{(a)}_i}\) is a block upper triangular matrix, with blocks in \({\mathbb {C}}^{p\times p}\).
Proposition 29
For \( z\not \in {\text {supp}}_y(\check{u})={\text {supp}}_y(u)\cup \sigma (W(y))\), the following expression
holds.
Proof
We have
Now, taking into account that
we deduce the result. \(\square \)
Lemma 4
Let \(r^{(a)}_j(x)\) be right root polynomials of the monic matrix polynomial W(x) given in (2), then
Proof
Notice that we can write
\(\square \)
Lemma 5
The function \(\check{\mathcal C}^{(a)}_{n;i}(x)W(x)r^{(b)}_j(x)\in {\mathbb {C}}^p[x]\) satisfies
where the \({\mathbb {C}}^p\)-valued function \(T^{(a,b)}(x)\) is analytic at \(x=x_b\) and, in particular, \(T^{(a,a)}(x) \in {\mathbb {C}}^p[x]\) .
Proof
First, for the function \(\check{\mathcal C}^{(a)}_{n;i}(x)W(x)r^{(b)}_j(x)\in {\mathbb {C}}^p[x]\), with \(a\ne b\),
we have
where the \({\mathbb {C}}^p\)-valued function \(T^{(a,b)}(x)\) is analytic at \(x=x_b\). Secondly, from (31) and Lemma 4 we deduce that
for some \(T^{(a,a)}(x)\in {\mathbb {C}}^p[x]\). Therefore, from Proposition 11 we get
and the result follows. \(\square \)
We now evaluate the spectral jets of the second kind functions \({\check{C}}^{[1]}(z)\) à la Cauchy; thus we must take limits of derivatives precisely at points of the spectrum of W(x), which do not lie in the region of definition but on its border. Notice that these operations are not available for the second kind functions à la Gram.
Lemma 6
For \(m=0,\dots ,\kappa ^{(a)}_j-1\), the following relations hold
Proof
For \(z\not \in {\text {supp}}_y(u)\cup \sigma (W(y))\), a consequence of Proposition 29 is that
But, as \(\sigma (W(y))\cap {\text {supp}}_y(u)=\varnothing \), the derivatives of the Cauchy kernel \(1/(z-y)\) are analytic functions at \(z=x_a\). Therefore,
for \(m=0,\dots ,\kappa ^{(a)}_j-1\). Equation (34) shows that \( \check{{\mathcal {C}}}_{n;i}^{(b)}(x)W(x)r^{(a)}_j(x)\) for \(b\ne a\) has a zero at \(x=x_a\) of order \(\kappa ^{(a)}_j\) and, consequently,
for \(m=0,\dots ,\kappa ^{(a)}_j-1\). \(\square \)
Definition 31
Given the functions \(w^{(a)}_{i,j;k}\) introduced in Proposition 11, let us introduce the matrix \({\mathcal {W}}_{j,i}^{(a)}\in {\mathbb {C}}^{\kappa ^{(a)}_{j}\times \kappa ^{(a)}_{i}}\)
and the matrix \({\mathcal {W}}^{(a)}_{j}\in \mathbb C^{\kappa _j^{(a)}\times \alpha _a}\) given by
We also consider the matrices \({\mathcal {W}}^{(a)}\in \mathbb C^{\alpha _a\times \alpha _a}\) and \({\mathcal {W}}\in {\mathbb {C}}^{Np\times Np}\)
Proposition 30
The following relations among the spectral jets, introduced in Definition 8, of the perturbed polynomials and second kind functions
are satisfied.
Proof
Equation (37) is a direct consequence of (35). According to (34) for \(m=0,\dots , \kappa _j^{(a)}-1\), we have
and collecting all these equations in a matrix form we get (38). Finally, we notice that from (37) and (38) we deduce
Now, using (36) we can write the second equation as
A similar argument leads to the second relation in (39). \(\square \)
Definition 32
For the Hankel masses, we also consider the matrices \(\mathcal T_i^{(a)}\in {\mathbb {C}}^{p\kappa ^{(a)}_i\times \alpha _a}\), \(\mathcal T^{(a)}\in {\mathbb {C}}^{p\alpha _a\times \alpha _a}\) and \(\mathcal T\in {\mathbb {C}}^{Np^2\times Np}\) given by
2.4 Spectral Christoffel–Geronimus formulas
Proposition 31
If \(n\ge N\), the matrix coefficients of the connection matrix satisfy
Proof
From the connection formula (24), for \(n\ge N\)
and we conclude that
Similarly, using the equation (21), we get
Now, from (39) we deduce
that is to say
\(\square \)
Remark 7
In the next results, the jets of the Christoffel–Darboux kernels are considered with respect to the first variable x, and we treat the y-variable as a parameter.
Theorem 2
(Spectral Christoffel–Geronimus formulas) When \(n\ge N\), for monic Geronimus perturbations, with masses as described in (29), we have the following last quasideterminantal expressions for the perturbed biorthogonal matrix polynomials and their matrix norms
Proof
First, we consider the expressions for \( {\check{P}}^{[1]}_{n}(x)\) and \({\check{H}}_{n}\). Using relation (21) we have
from Proposition 31 we obtain
and the result follows. To get the transformation for the H’s we proceed as follows. From (20) we deduce
But, according to Proposition 31, we have
Hence,
We now prove the result for \(\Big ({\check{P}}^{[2]}_{n}(y)\Big )^\top \). On the one hand, according to Definition 12 we rewrite (28) as
Therefore, the corresponding spectral jets do satisfy
and, recalling (39), we conclude that
On the other hand, from (27) we realize that
which can be subtracted from (42) to get
Hence, we obtain the formula
Now, for \(n\ge N\), from Definition 26 and the fact that \(\omega _{n,n-N}={\check{H}}_n\big (H_{n-N}\big )^{-1}\), we get
and the result follows. \(\square \)
2.5 Nonspectral Christoffel–Geronimus formulas
We now present an alternative approach, based on orthogonality relations, to the derivation of Christoffel type formulas; it avoids the use of the second kind functions and of the spectral structure of the perturbing polynomial. A key feature of these results is that they hold even for perturbing matrix polynomials with singular leading coefficient.
Definition 33
For a given perturbed matrix of generalized kernels \(\check{u}_{x,y}= u_{x,y}\big (W(y)\big )^{-1}+v_{x,y}\), with \(v_{x,y}W(y)=0_{p}\), we define a semi-infinite block matrix
Remark 8
Its blocks are \(R_{n,l}=\left\langle {P^{[1]}_n(x),I_py^l}\right\rangle _{\check{u}}\in {\mathbb {C}}^{p\times p}\). Observe that for a Geronimus perturbation of a Borel measure \({\text {d}}\mu (x,y)\), with general masses as in (29) we have
that, when the masses are discrete and supported by the diagonal \(y=x\), reduces to
Proposition 32
The following relations hold true
Proof
Equation (44) follows from Definition 33. Indeed,
To deduce (45) we recall (16), (44), and the Gauss factorization of the perturbed matrix of moments
Finally, to get (46), we use (17) together with (45), which implies \(\omega =\omega R W(\Lambda ^\top ) \big (S_2)^\top H^{-1}\), and as the resolvent is unitriangular with a unique inverse matrix [14], we obtain the result. \(\square \)
From (45) it immediately follows that
Proposition 33
The matrix R fulfills
Proposition 34
The matrix \(R_{[n]}\) is nonsingular for all \(n\in \{1,2,\dots \}\).
Proof
From (44) we conclude for the corresponding truncations that \(R_{[n]}=(S_1)_{[n]}{\check{G}}_{[n]}\) is nonsingular, as we are assuming, to ensure the orthogonality, that \({\check{G}}_{[n]}\) is nonsingular for all \(n\in \{1,2,\dots \}\). \(\square \)
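The matrix R of Definition 33 is easy to compute in the scalar setting of the earlier asides. The sketch below, assuming NumPy and a degree-one Geronimus perturbation with a mass, checks the block identity \(R_{n,l}=\langle P^{[1]}_n(x),I_py^l\rangle _{\check{u}}\) against the direct pairing, and the nonsingularity of the truncations \(R_{[m]}\) asserted in Proposition 34:

```python
import numpy as np

nodes = np.array([-2.0, -1.2, -0.4, 0.5, 1.3, 2.1])
w = np.array([0.4, 1.1, 0.9, 0.6, 1.3, 0.8])
c, xi, n = -3.0, 1.5, 6        # Geronimus data: W(y) = y - c, mass xi at c

G = np.array([[np.sum(w * nodes ** (k + l)) for l in range(n)] for k in range(n)])
Gc = np.array([[np.sum(w * nodes ** (k + l) / (nodes - c)) + xi * c ** (k + l)
                for l in range(n)] for k in range(n)])

# Coefficients of the unperturbed monic polynomials P_n^{[1]}
C = np.linalg.cholesky(G)
S1 = np.linalg.inv(C / np.diag(C))
Pn = lambda k, x: np.polyval(S1[k, : k + 1][::-1], x)

R = S1 @ Gc                    # R = S1 G_check (Definition 33)

# R_{n,l} = <P_n^{[1]}(x), y^l>_{u_check}: pairing with the measure plus the mass
for k in range(n):
    for l in range(n):
        direct = np.sum(w / (nodes - c) * Pn(k, nodes) * nodes ** l) \
                 + xi * Pn(k, c) * c ** l
        assert np.isclose(R[k, l], direct, atol=1e-6)

# Proposition 34: every truncation R_[m] = (S1)_[m] G_check_[m] is nonsingular
for m in range(1, n + 1):
    assert np.linalg.matrix_rank(R[:m, :m]) == m
```

Note that R pairs the *unperturbed* polynomials with the *perturbed* sesquilinear form, which is why no orthogonality is visible in its entries.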
Definition 34
Let us introduce the polynomials \( r^K_{n,l}(z)\in \mathbb C^{p\times p}[z]\), \(l\in \{0,\dots ,n-1\}\), given by
Proposition 35
For \(l\in \{0,1,\dots ,n-1\}\) and \(m=\min (n,N)\) we have
Proof
It follows from (27), Definition 33, and (14). \(\square \)
Definition 35
For \(n\ge N\), given the matrix
we construct a submatrix of it by selecting Np columns among all the np columns. For that aim, we use indexes (i, a) labeling the columns, where i runs through \(\{0,\dots ,n-1\}\) and indicates the block, and \(a\in \{1,\dots , p\}\) denotes the corresponding column in that block; i.e., (i, a) is an index selecting the a-th column of the i-th block. Given a set of Np different couples \(I=\{(i_r,a_r)\}_{r=1}^{Np}\), with a lexicographic ordering, we define the corresponding square submatrix \(R_n^{\square }:=\big [{\mathfrak {c}}_{(i_1,a_1)},\dots , \mathfrak c_{(i_{Np},a_{Np})}\big ]\). Here \({\mathfrak {c}}_{(i_r,a_r)}\) denotes the \(a_r\)-th column of the matrix
The set of indexes I is said to be poised if \(R_n^{\square }\) is nonsingular. We also use the notation \(r_n^{\square }:=\big [\tilde{{\mathfrak {c}}}_{(i_1,a_1)},\dots , \tilde{{\mathfrak {c}}}_{(i_{Np},a_{Np})}\big ]\), where \(\tilde{\mathfrak c}_{(i_r,a_r)}\) denotes the \(a_r\)-th column of the matrix \( R_{n,i_r}\). Given a poised set of indexes we define \((r^K_{n}(y))^\square \) as the matrix built up by taking from the matrices \(r^K_{n,i_r}(y)\) the columns \(a_r\).
Lemma 7
For \(n\ge N\), there exists at least one poised set.
Proof
For \(n\ge N\), we consider the rectangular block matrix
As the truncation \(R_{[n]}\) is nonsingular, this matrix is full rank, i.e., all its Np rows are linearly independent. Thus, there must be Np independent columns and the desired result follows. \(\square \)
Lemma 8
Whenever the leading coefficient \(A_N\) of the perturbing polynomial W(y) is nonsingular, we can decompose any monomial \(I_p y^l\) as
where \(\alpha _l(y),\beta _l(y)=\beta _{l,0}+\cdots +\beta _{l,N-1}y^{N-1}\in \mathbb C^{p\times p}[y] \), with \(\deg \alpha _l(y) \le l-N\).
Proposition 36
Let us assume that the matrix polynomial \(W(y)=A_Ny^N+\dots +A_0\) has a nonsingular leading coefficient and \(n\ge N\). Then, the set \(\{0,1,\dots ,N-1\}\) is poised.
Proof
From Proposition 33 we deduce
for \(l\in \{0,1,\dots ,n-1\}\). In particular, the resolvent vector \(\big [\omega _{n,n-N},\dots ,\omega _{n,n-1}\big ]\) is a solution of the linear system
We will show now that this is the unique solution to this linear system. Let us proceed by contradiction and assume that there is another solution, say \(\big [{\tilde{\omega }}_{n,n-N},\dots ,{\tilde{\omega }}_{n,n-1}\big ]\). Consider then the monic matrix polynomial
Because \(\big [{\tilde{\omega }}_{n,n-N},\dots ,{\tilde{\omega }}_{n,n-1}\big ]\) solves (47) we know that
Lemma 8 implies the following relations for \(\deg \alpha _l(y)<m\),
But \(\deg \alpha _l(y)\le l-N\), so that the previous equation will hold at least for \(l-N<m\); i.e., \(l<m+N\). Consequently, for \(l\in \{0,\dots ,n-1\}\), we find
Therefore, from the uniqueness of the biorthogonal families, we deduce \({{\tilde{P}}}_n(x)={\check{P}}^{[1]}_n(x)\), and, recalling (21), there is a unique solution of (47). Thus,
is nonsingular, and \(I=\{0,\dots ,N-1\}\) is a poised set. \(\square \)
Proposition 37
For \(n\ge N\), given a poised set, which always exists, we have
Proof
It follows from Proposition 33. \(\square \)
Theorem 3
(Non-spectral Christoffel–Geronimus formulas) Given a matrix Geronimus transformation the corresponding perturbed polynomials, \(\{{\check{P}}^{[1]}_{n}(x)\}_{n=0}^\infty \) and \(\{\check{P}^{[2]}_{n}(y)\}_{n=0}^\infty \), and matrix norms \(\{\check{H}_{n}\}_{n=0}^\infty \) can be expressed as follows. For \(n\ge N\),
and two alternative expressions
Proof
For \(m=\min (n,N)\), from the connection formula (18) we have
and from Proposition 33 we deduce
and use (41). Then, recalling Proposition 37 we obtain the desired formulas for \(\check{P}^{[1]}_n(x)\) and \({\check{H}}_n\).
For \(n\ge N\), we have
so that
In particular, recalling (20), we deduce that
\(\square \)
2.6 Spectral versus nonspectral
Definition 36
We introduce the truncation given by taking only the first N columns of a given semi-infinite matrix
Then, we can connect the spectral methods and the nonspectral techniques as follows
Proposition 38
The following relation takes place
Proof
From (24) we deduce that
Taking the corresponding root spectral jets, we obtain
that, together with (39), gives
Now, relation (45) implies
But, given that \(\omega \) is a lower unitriangular matrix, and therefore with an inverse, see [14], the unique solution to \(\omega X=0\), where X is a semi-infinite matrix, is \(X=0\). \(\square \)
We now discuss an important fact, which ensures that the spectral Christoffel–Geronimus formulas presented in the previous sections make sense.
Corollary 1
If the leading coefficient \(A_N\) is nonsingular and \(n\ge N\), then
is nonsingular.
Proof
From Proposition 38 one deduces the following formula
Now, Proposition 36 and Lemma 2 lead to the result. \(\square \)
We stress at this point that (48) connects the spectral and the nonspectral methods. Moreover, when we border with a further block row we obtain
2.7 Applications
2.7.1 Unimodular Christoffel perturbations and nonspectral techniques
The spectral methods apply to those Geronimus transformations with a perturbing polynomial W(y) having a nonsingular leading coefficient \(A_N\). This was also the case for the techniques developed in [3] for matrix Christoffel transformations, where the perturbing polynomial had a nonsingular leading coefficient. However, although we can extend the use of the spectral techniques to the study of matrix Geronimus transformations, we also have a nonspectral approach, applicable even for singular leading coefficients. For example, some cases that have appeared several times in the literature –see [21]– are unimodular perturbations and, consequently, have a perturbing polynomial W(y) with singular leading coefficient. In this case, \((W(y))^{-1}\) is a matrix polynomial, and we can consider the Geronimus transformation associated with the matrix polynomial \((W(y))^{-1}\) –as the spectrum is empty, \(\sigma (W(y))=\varnothing \), no masses appear– as a Christoffel transformation with perturbing matrix polynomial W(y) of the original matrix of generalized kernels
We can apply Theorem 3 with
For example, when the matrix of generalized kernels is a matrix of measures \(\mu \), we can write
Here W(x) is a Christoffel perturbation and \(\deg ((W(x))^{-1})\) gives the number of original orthogonal polynomials required for the Christoffel type formula. Theorem 3 can be nicely applied to get \({\check{P}}^{[1]}_n(x)\) and \({\check{H}}_n\). However, it only gives Christoffel–Geronimus formulas for \(\big ({\check{P}}^{[2]}_n(y)\big )^\top A_N\) and, given that \(A_N\) is singular, we only partially recover \({\check{P}}^{[2]}_n(y)\). This problem disappears whenever we have symmetric generalized kernels \(u_{x,y}=(u_{y,x})^\top \), see Remark 3, as then \( P_n^{[1]}(x)= P_n^{[2]}(x)=:P_n(x)\) and biorthogonality collapses to orthogonality of \(\{P_n(x)\}_{n=0}^\infty \). From (49), we need to require
which, when the initial matrix of kernels is itself symmetric, \(u_{x,y}= (u_{y,x})^\top \), reads \(u_{x,y}W(y)= (W(x))^\top u_{x,y}\). Now, if we are dealing with Hankel matrices of generalized kernels, \(u_{x,y}=u_{x,x}\), we find \(u_{x,x}W(x)= (W(x))^\top u_{x,x}\); in the scalar case, \(u_{x,x}=u_0I_p\) with \(u_0\) a generalized function, we need W(x) to be a symmetric matrix polynomial. For this scenario, if \(\{p_n(x)\}_{n=0}^\infty \) denotes the set of monic orthogonal polynomials associated with \(u_{0}\), we have \(R_{n,l}=\big \langle u_{0} , p_n(x)W(x)x^l\big \rangle \).
For example, if we take \(p=2\), with the unimodular perturbation given by
we have, that the inverse is the following matrix polynomial
where \(\det W(x)\) is a constant, and the inverse also has degree 2. Therefore, for \(n\in \{2,3,\dots \}\), we have the following expressions for the perturbed matrix orthogonal polynomials
and the corresponding matrix norms or quasitau matrices are
Here the natural numbers k and l satisfy \(0\le k<l\le n-1\) and are among those (which we know exist) that fulfill
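Since the concrete degree two perturbation above appears only in the displayed equations, here is a minimal sketch with a representative unimodular \(2\times 2\) example of our own choosing (the matrix `W` below is an illustrative assumption, not the paper's): its determinant is constant, its leading coefficient is singular, and its inverse is again a matrix polynomial of degree 2.

```python
import sympy as sp

x = sp.symbols('x')

# Illustrative unimodular perturbation: det W(x) = 1, singular leading coefficient.
W = sp.Matrix([[1 + x**2, x],
               [x,        1]])

detW = W.det()                  # constant, so W is unimodular
Winv = sp.simplify(W.inv())     # again a matrix polynomial: [[1, -x], [-x, 1 + x**2]]

def deg(M):
    """Degree of a matrix polynomial in x (maximum over nonzero entries)."""
    return max(sp.Poly(e, x).degree() for e in M if e != 0)
```

Both `W` and `Winv` have degree 2, matching the statement that for \(p=2\) the degrees of a unimodular perturbation and of its inverse coincide.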
Observe that the case of size \(p=2\) unimodular matrix polynomials is particularly simple, because the degrees of the perturbation and of its inverse coincide. However, for bigger sizes this is not the case. For a better understanding, let us recall that unimodular matrices always factorize in terms of elementary matrix polynomials and elementary matrices, which are of the following form
-
(i)
Elementary matrix polynomials: \(e_{i,j}(x)=I_p+E_{i,j}p(x)\) with \(i\ne j\) and \(E_{i,j}\) the matrix with a 1 at the (i, j) entry and zero elsewhere, and \(p(x)\in {\mathbb {C}}[x]\).
-
(ii)
Elementary matrices:
-
(a)
\(I_p+(c-1)E_{i,i}\) with \(c\in {\mathbb {C}}\).
-
(b)
\(\eta ^{(i,j)}=I_p-E_{i,i}-E_{j,j}+E_{i,j}+E_{j,i}\): the identity matrix with the i-th and j-th rows interchanged.
The inverses of these matrices are elementary again
and the inverse of a general unimodular matrix polynomial can be computed immediately once its factorization in terms of elementary matrices is given. However, the degrees of the matrix polynomial and of its inverse require a separate analysis.
If our perturbation \(W(x)=I_p+p(x)E_{i,j}\) is an elementary matrix polynomial, with \(\deg p(x)=N\), then we have that \((W(x))^{-1}=I_p-p(x)E_{i,j}\) and \(\deg W(x)=\deg ((W(x))^{-1})=N\). If we assume an initial matrix of generalized kernels \(u_{x,y}\), then, for \(n\ge N\), the first family of perturbed polynomials will be
Here, the sequence of different integers \(\{k_1,\dots ,k_N\}\subset \{1,\dots ,n-1\}\) is such that
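The claim \((W(x))^{-1}=I_p-p(x)E_{i,j}\) rests on the nilpotency \(E_{i,j}^2=0\) for \(i\ne j\). A minimal symbolic sketch (the size \(p=3\), the indices and the polynomial p(x) are illustrative choices):

```python
import sympy as sp

x = sp.symbols('x')
p_size = 3                          # illustrative size
pol = x**3 + 2*x                    # illustrative p(x), with deg N = 3

def E(i, j):
    """Matrix unit E_{i,j} (1-based indices): 1 at entry (i, j), zero elsewhere."""
    M = sp.zeros(p_size, p_size)
    M[i - 1, j - 1] = 1
    return M

W = sp.eye(p_size) + pol * E(1, 2)      # elementary matrix polynomial e_{1,2}(x)
Winv = sp.eye(p_size) - pol * E(1, 2)   # inverse: E_{1,2}**2 = 0 kills the cross term
```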
A somewhat more involved situation appears when we have a product of different elementary matrix polynomials, for example
which has two possible forms depending on whether \(j_1\ne i_2\) or \(j_1= i_2\)
so that
For the inverse, we find
and
Thus, if either \(j_1\ne i_2\) and \( j_2\ne i_1\), or \(j_1= i_2\) and \( j_2= i_1\), the degrees of W(x) and \((W(x))^{-1}\) coincide; for \(j_1= i_2\) and \( j_2\ne i_1\) we find \(\deg W(x)>\deg ((W(x))^{-1})\), and when \(j_1\ne i_2\) and \( j_2= i_1\) we have \(\deg W(x)<\deg ((W(x))^{-1})\). Consequently, the degree of a unimodular matrix polynomial can be bigger than, equal to, or smaller than the degree of its inverse.
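These degree regimes can be checked symbolically. The sketch below takes the linked case \(j_1=i_2\) with \(j_2\ne i_1\) (the factor degrees 2 and 3 are illustrative choices) and confirms \(\deg W(x)>\deg((W(x))^{-1})\):

```python
import sympy as sp

x = sp.symbols('x')

def E(i, j, size=3):
    """Matrix unit E_{i,j} of the given size (1-based indices)."""
    M = sp.zeros(size, size)
    M[i - 1, j - 1] = 1
    return M

def deg(M):
    """Degree of a matrix polynomial in x (maximum over nonzero entries)."""
    return max(sp.Poly(e, x).degree() for e in M if e != 0)

# Linked factors: (1,2) followed by (2,3), so j1 = i2 = 2 and j2 = 3 != i1 = 1.
# The product acquires a degree 2+3 cross term x**5 E_{1,3}, but the inverse
# (reversed, negated factors) does not, because E_{2,3} E_{1,2} = 0.
W = (sp.eye(3) + x**2 * E(1, 2)) * (sp.eye(3) + x**3 * E(2, 3))
Winv = (sp.eye(3) - x**3 * E(2, 3)) * (sp.eye(3) - x**2 * E(1, 2))
```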
We will be interested in unimodular perturbations W(x) that factorize in terms of K elementary polynomial factors \(\{e_{i_m,j_m}(x)\}_{m=1}^K\) and L exchange factors \(\{\eta ^{(l_n,q_n)}\}_{n=1}^L\). We will use the following notation for elementary polynomials and elementary matrices
suited to take products among them, according to the product table
Bearing this in mind, we denote by \(\sigma _{i}^K=\big \{{\sigma }_{i,j}^{K}\big \}_{j=1}^{|\sigma _{i}^K|}\) all the possible permutations of a vector with K entries, i of them equal to 1 and the rest equal to zero; here \({\sigma }_{i,j}^{K}=\begin{pmatrix} ({\sigma }_{i,j}^{K})_1, \dots , ({\sigma }_{i,j}^{K})_K \end{pmatrix}\in ({\mathbb {Z}}_2)^K\), with \(({\sigma }_{i,j}^{K})_r \in {\mathbb {Z}}_2:=\{1,0\}\) and \(|\sigma _{i}^K|=\begin{pmatrix} K \\ i \end{pmatrix}\). With this notation we can rewrite a given unimodular perturbation as a sum. Actually, any unimodular polynomial that factorizes in terms of K elementary polynomials \(e_{i,j}(x)\) and L elementary matrices \(\eta ^{(l,q)}\), in a given order, can be expanded into a sum of \(2^K\) terms
where \((i,j)_{p_{i,j}}^0={\mathbb {I}}_p\). Notice that although in the factorization of W we have assumed that it starts and ends with elementary polynomials, the result would still be valid if it started and/or ended with an interchange elementary matrix \(\eta \). We simplify the notation of this type of expression by considering sequences of couples of natural numbers \(\big \{\{i_1,j_1\},\{i_2,j_2\},\dots ,\{i_K,j_K\}\big \}\), where \(\{n,m\}\) stands either for \((n,m)_{p_{m,n}}\) or for [m, n], and identifying paths. We say that two couples of naturals \(\{k,l\}\) and \(\{n,m\}\) are linked if \(l=n\). When we deal with a couple [n, m], the order of the natural numbers is not relevant; for example, (k, l) and [l, m] are linked, and so are (k, l) and [m, l]. A path of length l is a subset of I of the form
The order of the sequence is respected in the construction of each path. Thus, the element \((a_i,a_{i+1})\), as an element of the sequence I, precedes the element \((a_{i+1}, a_{i+2})\) in the sequence. A path is proper if it does not belong to a longer path. Out of the \(2^K\) terms that appear, only paths remain. In order to know the degree of the unimodular polynomial one must check the factors of the proper paths and look for the maximum degree involved in those factors. For a better understanding let us work out a couple of significant examples. These examples deal with nonsymmetric matrices and, therefore, we have complete Christoffel type expressions for \({\check{P}}^{[1]}_n(x)\) and \({\check{H}}_n\), but also the mentioned penalty for \(P^{[2]}_n(x)\). Firstly, let us consider a polynomial with \(K=5\), \(L=0\) and \(p=6\),
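The \(2^K\)-term expansion and the survival of paths can also be illustrated programmatically. In the sketch below (a K = 3 factorization of our own choosing, with couples (1,2), (2,3) linked and (4,5) unlinked), the terms whose consecutive couples are not linked vanish, since \(E_{i,j}E_{k,l}=0\) unless \(j=k\):

```python
import sympy as sp
from itertools import product as binary_vectors

x = sp.symbols('x')
P = 6

def E(i, j):
    """Matrix unit E_{i,j} of size P (1-based indices)."""
    M = sp.zeros(P, P)
    M[i - 1, j - 1] = 1
    return M

# Illustrative K = 3 factorization: (1,2) and (2,3) are linked; (4,5) is not
# linked to either of them.
factors = [(1, 2, x), (2, 3, x**2), (4, 5, x)]

# Expand the product into its 2**K terms: each sigma in (Z_2)**K picks, from
# every factor, either the identity (0) or the nilpotent part (1).
surviving = []
for sigma in binary_vectors([0, 1], repeat=len(factors)):
    term = sp.eye(P)
    for s, (i, j, pol) in zip(sigma, factors):
        term *= pol * E(i, j) if s else sp.eye(P)
    if term != sp.zeros(P, P):
        surviving.append(sigma)
```

Only 5 of the 8 terms survive: the empty product, the three single factors, and the linked chain (1,2)(2,3); every term containing an unlinked consecutive pair is annihilated.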
in terms of sequences of couples, the paths for this unimodular polynomial have the following structure
where \(\{I_6\}_{i=0}\) indicates that the product not involving couples produces the identity matrix (in general it will be a product of interchange matrices), and we have underlined the proper paths. Thus
Its inverse is
and the paths are
Thus,
Then, looking at the proper paths, we find
For example, if we assume that
we get for the corresponding unimodular matrix polynomial and its inverse
so that, for example, the first family of perturbed biorthogonal polynomials, for \(n\ge 3\) is
Here, the sequence of different integers \(\{k_1,k_2,k_3\}\subset \{1,\dots ,n-1\}\) is such that
Let us now work out a polynomial with \(K=L=4\) and \(p=5\). The unimodular matrix polynomial we consider is
The paths are
so that
The inverse matrix is
with paths given by
and, consequently,
Proper paths, which we have underlined, give the degrees of the polynomials
For example, if we assume that
we find \(\deg W(x)=\deg ((W(x))^{-1})=3\) and formula (50) is applicable for W(x) as given in (51).
We now seek symmetric unimodular polynomials of the form
where V(x) is a unimodular matrix polynomial. For example, we take \(p=4\) and consider
so that the perturbing symmetric unimodular matrix polynomial is
Let us assume that
then
Now, we take a scalar matrix of linear functionals \(u=u_{0} I_p\), with \(u_{0}\in \big ({\mathbb {R}}[x]\big )'\) positive definite, and assume that the polynomials \(p_{1,2}(x), p_{2,3}(x),p_{3,4}(x)\in {\mathbb {R}}[x]\). Then, we obtain matrix orthogonal polynomials \(\{P_n(x)\}_{n=0}^\infty \) for the matrix of linear functionals \(W(x)u_0\), which in terms of the sequence of scalar orthogonal polynomials \(\{p_n(x)\}_{n=0}^\infty \) of the linear functional \(u_0\) are, for \(n\ge 4\)
The set \(\{k_1,k_2,k_3,k_4\}\subset \{1,\dots ,n-1\}\) is such that
2.7.2 Degree one matrix Geronimus transformations
We consider a degree one perturbing polynomial of the form
and assume, for the sake of simplicity, that all the masses \(\xi \) are taken to be zero, i.e., there are no masses. Observe that in this case a Jordan pair (X, J) is such that \(A=XJX^{-1}\), and Lemma 1 implies that the root spectral jet of a polynomial \(P(x)=\sum _kP_kx^k\in {\mathbb {C}}^{p\times p}[x]\) is \(\varvec{{\mathcal {J}}}_P=P(A)X\), where we understand a right evaluation, i.e., \(P(A):=\sum _{k}P_k A^k\). A similar argument, for \(\sigma (A)\cap {\text {supp}}_y(u)=\varnothing \), yields
expressed in terms of the resolvent \((A-I_py)^{-1}\) of A. Formally, it can be written
where we again understand a right evaluation in the Taylor series of the Cauchy transform. Moreover, we also need the root spectral jet of the mixed Christoffel–Darboux kernel
that for a Hankel generalized kernel \(u_{x,y}\), using the Christoffel–Darboux formula for mixed kernels, reads
We also have \({\mathcal {V}}(x,y)=I_p\) so that \(\varvec{\mathcal J}_{{\mathcal {V}}}=X\).
Thus, for \(n\ge 1\) we have
For a Hankel matrix of bivariate generalized functionals, i.e., with a Hankel Gram matrix so that the Christoffel–Darboux formula holds, we have
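The right evaluation used for the root spectral jet can be sketched for this degree one case. We assume, consistently with \(A=XJX^{-1}\) above, that the monic perturbation is \(W(y)=I_py-A\); the concrete matrices J and X below are illustrative choices. Then \(\varvec{{\mathcal {J}}}_W=W(A)X=0\), as expected since A is a right root of W:

```python
import sympy as sp

# Illustrative Jordan pair: A = X J X^{-1} with a single 2x2 Jordan block.
J = sp.Matrix([[2, 1],
               [0, 2]])
X = sp.Matrix([[1, 1],
               [1, 2]])
A = X * J * X.inv()

def right_eval(coeffs, A):
    """Right evaluation P(A) = sum_k P_k A**k (matrix coefficients on the left)."""
    n = A.shape[0]
    return sum((Pk * A**k for k, Pk in enumerate(coeffs)), sp.zeros(n, n))

# Degree one perturbation W(y) = I*y - A: coefficients W_0 = -A, W_1 = I.
W_coeffs = [-A, sp.eye(2)]
jet = right_eval(W_coeffs, A) * X        # root spectral jet J_W = W(A) X
```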
Notes
Understood as a prolongation problem, see §5 in [66], we have similar results if we require \({\mathcal {L}}_u:{\mathcal {O}}_M\rightarrow {\mathcal {O}}_c'\) or \(\mathcal L_u:{\mathcal {O}}_c\rightarrow {\mathcal {O}}_c'\) or any other possibility that makes sense for polynomials and support.
In [58] it is defined as the Schur complement with respect a big block built up by the blocks determined by the indices I.
References
Adler, M., van Moerbeke, P.: Generalized orthogonal polynomials, discrete KP and Riemann–Hilbert problems. Commun. Math. Phys. 207, 589–620 (1999)
Álvarez-Fernández, C., Mañas, M., Fidalgo Prieto, U.: The multicomponent 2D Toda hierarchy: generalized matrix orthogonal polynomials, multiple orthogonal polynomials and Riemann–Hilbert problems. Inverse Probl. 26, 055009 (2010)
Álvarez Fernández, C., Ariznabarreta, G., García-Ardila, J.C., Man̂as, M., Marcellán, F.: Christoffel transformations for matrix orthogonal polynomials in the real line and the non-Abelian 2D Toda lattice hierarchy. Int. Math. Res. Not. 2017, 1285–1341 (2017)
Álvarez-Nodarse, R., Durán, A.J., Martínez de los Ríos, A.: Orthogonal matrix polynomials satisfying second order difference equation. J. Approx. Theory 169, 40–55 (2013)
Aptekarev, A.I., Nikishin, E.M.: The scattering problem for a discrete Sturm–Liouville operator. Math. USSR Sb. 49, 325–355 (1984)
Ariznabarreta, G., García-Ardila, J.C., Mañas, M., Marcellàn, F.: Non-abelian integrable hierarchies: matrix biorthogonal polynomials and perturbations. J. Phys. A Math. Theor. 51, 205204 (2018)
Ariznabarreta, G., Mañas, M.: Matrix orthogonal Laurent polynomials on the unit circle and Toda type integrable systems. Adv. Math. 264, 396–463 (2014)
Ariznabarreta, G., Mañas, M.: Multivariate orthogonal polynomials and integrable systems. Adv. Math. 302, 628–739 (2016)
Bueno, M.I., Marcellán, F.: Darboux transformation and perturbation of linear functionals. Linear Algebra Appl. 384, 215–242 (2004)
Cantero, M.J., Marcellán, F., Moral, L., Velázquez, L.: Darboux transformations for CMV matrices. Adv. Math. 298, 122–206 (2016)
Castro, M.M., Grünbaum, F.A.: Orthogonal matrix polynomials satisfying first order differential equations: a collection of instructive examples. J. Nonlinear Math. Phys. 12, 63–76 (2005)
Chihara, T.S.: An Introduction to Orthogonal Polynomials. Mathematics and Its Applications Series, vol. 13. Gordon and Breach Science Publishers, New York–London–Paris (1978)
Christoffel, E.B.: Über die Gaussische Quadratur und eine Verallgemeinerung derselben. Journal für die reine und angewandte Mathematik (Crelle’s journal) 55, 61–82 (1858). (in German)
Cooke, R.: Infinite Matrices and Sequence Spaces. Macmillan, London (1950). Reprinted in Dover Books on Mathematics, Dover Publications (2014)
Damanik, D., Pushnitski, A., Simon, B.: The analytic theory of matrix orthogonal polynomials. Surv. Approx. Theory 4, 1–85 (2008)
Derevyagin, M., Marcellán, F.: A note on the Geronimus transformation and Sobolev orthogonal polynomials. Numeric. Algorithms 67, 271–287 (2014)
Derevyagin, M., García-Ardila, J.C., Marcellán, F.: Multiple Geronimus transformations. Linear Algebra Appl. 454, 158–183 (2014)
Durán, A.J.: On orthogonal polynomials with respect to a definite positive matrix of measures. Can. J. Math. 47, 88–112 (1995)
Durán, A.J.: Markov’s theorem for orthogonal matrix polynomials. Can. J. Math. 48, 1180–1195 (1996)
Durán, A.J.: Matrix inner product having a matrix symmetric second order differential operator. Rocky Mt. J. Math. 27, 585–600 (1997)
Durán, A.J., Grünbaum, F.A.: A survey on orthogonal matrix polynomials satisfying second order differential equations. J. Comput. Appl. Math. 178, 169–190 (2005)
Etinghof, P., Gel’fand, I.M., Gel’fand, S., Retakh, V.S.: Factorization of differential operators, quasideterminants, and nonabelian Toda field equations. Math. Res. Lett. 4, 413–425 (1997)
Etinghof, P., Gel’fand, I.M., Gel’fand, S., Retakh, V.S.: Nonabelian integrable systems, quasideterminants, and Marchenko lemma. Math. Res. Lett. 5, 1–12 (1998)
Fuhrmann, P.A.: Orthogonal matrix polynomials and system theory. Rendiconti del Seminario Matematico Università e Politecnico di Torino, Special Issue, 68–124 (1987)
Gautschi, W.: An algorithmic implementation of the generalized Christoffel theorem. In: Hämmerlin, G. (ed.) Numerical Integration. International Series of Numerical Mathematics, vol. 57, pp. 89–106. Birkhäuser, Basel (1982)
Gautschi, W.: Orthogonal Polynomials Computation and Approximation. Numerical Mathematics and Scientific Computation. Oxford University Press, Oxford (2004)
Gel’fand, I.M., Shilov, G.E.: Generalized Functions. Volume I: Properties and Operations. Academic Press, New York (1964). Reprinted in the AMS Chelsea Publishing, American Mathematical Society, Providence, RI (2016)
Gel’fand, I.M.: Generalized Functions. Volume II: Spaces of Fundamental Solutions and Generalized Functions. Academic Press, New York (1968). Reprinted in the AMS Chelsea Publishing, American Mathematical Society, Providence, RI (2016)
Gel’fand, I.M., Gel’fand, S., Retakh, V.S., Wilson, R.: Quasideterminants. Adv. Math. 193, 56–141 (2005)
Gel’fand, I.M., Krob, D., Lascoux, A., Leclerc, B., Retakh, V.S., Thibon, J.-Y.: Noncommutative symmetric functions. Adv. Math. 112, 218–348 (1995)
Gel’fand, I.M., Retakh, V.S.: Determinants of matrices over noncommutative rings. Funct. Anal. Appl. 25, 91–102 (1991)
Gel’fand, I.M., Retakh, V.S.: Theory of noncommutative determinants, and characteristic functions of graphs. Funct. Anal. Appl. 26, 231–246 (1992)
Gel’fand, I.M., Retakh, V.S.: Quasideterminants. I. Selecta Math. (New Ser.) 3, 517–546 (1997)
Geronimo, J.S.: Scattering theory and matrix orthogonal polynomials on the real line. Circuits Syst. Signal Process 1, 471–495 (1982)
Geronimus, J.: On polynomials orthogonal with regard to a given sequence of numbers and a theorem by W. Hahn. Izvestiya Akademii Nauk SSSR 4, 215–228 (1940). (in Russian)
Gilson, C.R., Macfarlane, S.R.: Dromion solutions of noncommutative Davey–Stewartson equations. J. Phys. A Math. Theor. 42, 235202 (2009)
Gilson, C.R., Nimmo, J.C., Ohta, Y.: Quasideterminant solutions of a non-Abelian Hirota–Miwa equation. J. Phys. A Math. Theor. 40, 12607–12617 (2007)
Gilson, C.R., Nimmo, J.C., Sooman, C.M.: On a direct approach to quasideterminant solutions of a noncommutative modified KP equation. J. Phys. A Math. Theor. 41, 085202 (2008)
Gohberg, I., Lancaster, P., Rodman, L.: Matrix Polynomials. Computer Science and Applied Mathematics. Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York–London (1982)
Golinskii, L.: On the scientific legacy of Ya. L. Geronimus (to the hundredth anniversary). In: Priezzhev, V.B., Spiridonov, V.P. (eds.) Self-Similar Systems, Proceedings of the International Workshop, July 30–August 7, Dubna, Russia, pp. 273–281. Publishing Department, Joint Institute for Nuclear Research, Moscow Region, Dubna (1998)
Grünbaum, F.A., Pacharoni, I., Tirao, J.: Matrix valued orthogonal polynomials of the Jacobi type. Indag. Math. 14, 353–366 (2003)
Grünbaum, F.A., Pacharoni, I., Tirao, J.: Matrix valued orthogonal polynomials of the Jacobi type: the role of group representation theory. Ann. l’Inst. Fourier 55, 2051–2068 (2005)
Haynsworth, E.V.: On the Schur Complement. Basel Mathematical Notes (University of Basel), vol. 20 (June 1968)
Haynsworth, E.V.: Reduction of a matrix using properties of the Schur complement, Linear Algebra and its Applications. Linear Algebra Appl. 3, 23–29 (1970)
Hörmander, L.: The Analysis of Partial Differential Operators I. Distribution Theory and Fourier Analysis, 2nd edn. Springer, New York (1990)
Hahn, W.: Über die Jacobischen Polynome und zwei verwandte Polynomklassen. Math. Z. 39, 634–638 (1935). (in German)
Heyting, A.: Die Theorie der linearen Gleichungen in einer Zahlenspezies mit nichtkommutativer Multiplikation. Math. Ann. 98, 465–490 (1928)
Krein, M.G.: The fundamental propositions of the theory of representations of Hermitian operators with deficiency index \((m, m)\). Ukr. Math. J. 1, 3–66 (1949). (in Russian)
Krob, D., Leclerc, B.: Minor identities for quasi-determinants and quantum determinants. Commun. Math. Phys. 169, 1–23 (1995)
Li, C.X., Nimmo, J.J.C.: Darboux transformations for a twisted derivation and quasideterminant solution to the super KdV equation. Proc. R. Soc. A 466, 2471–2493 (2010)
Marcellán, F., Xu, Y.: On Sobolev orthogonal polynomials. Expos. Math. 33, 308–352 (2015)
Markus, A.S.: Introduction to the Spectral Theory of Polynomial Operator Pencils. Translated from the Russian by H.H. McFaden. Translation edited by Ben Silver. With an appendix by M.V. Keldysh. Translations of Mathematical Monographs, vol. 71. American Mathematical Society, Providence, RI (1988)
Maroni, P.: Sur quelques espaces de distributions qui sont des formes linéaires sur l’espace vectoriel des polynômes. In: Brezinski, C. (ed.) Orthogonal Polynomials and Their Applications. Lecture Notes in Mathematics, vol. 117, pp. 184–194. Springer, Berlin (1985). (in French)
Maroni, P.: Le calcul des formes linéaires et les polynômes orthogonaux semiclassiques. In: Alfaro, M., et al. (eds.) Orthogonal Polynomials and Their Applications. Lecture Notes in Mathematics, vol. 1329, pp. 279–290. Springer, Berlin (1988). (in French)
Maroni, P.: Sur la suite de polynômes orthogonaux associée à la forme \(u = \delta_c +\lambda (x-c)^{-1}L\). Period. Math. Hung. 21, 223–248 (1990). (in French)
Miranian, L.: Matrix valued orthogonal polynomials on the real line: some extensions of the classical theory. J. Phys. A Math. Gen. 38, 5731–5749 (2005)
Nikiforov, A.F., Suslov, S.K., Uvarov, V.B.: Classical Orthogonal Polynomials of a Discrete Variable. Springer, Berlin (1991)
Olver, P.J.: On multivariate interpolation. Stud. Appl. Math. 116, 201–240 (2006)
Ore, O.: Linear equations in non-commutative fields. Ann. Math. 32, 463–477 (1931)
Retakh, V., Rubtsov, V.: Noncommutative Toda chains, Hankel quasideterminants and Painlevé II equation. J. Phys. A Math. Theor. 43, 505204 (2010)
Richardson, A.R.: Hypercomplex determinants. Messenger Math. 55, 145–152 (1926)
Richardson, A.R.: Simultaneous linear equations over a division algebra. Proc. Lond. Math. Soc. 28, 395–420 (1928)
Rodman, L.: Orthogonal matrix polynomials. Orthogonal Polynomials (Columbus OH 1989). NATO Adv. Sci. Inst. Ser. C. Math. Phys. Sci., vol. 294, pp. 345–362. Kluwer, Dordrecht (1990)
Rowen, L.: Ring Theory, vol. I. Academic Press, San Diego (1988)
Schur, I.: Über Potenzreihen, die im Innern des Einheitskreises beschränkt sind. Journal für die reine und angewandte Mathematik (Crelle's Journal) 147, 205–232 (1917)
Schwartz, L.: Théorie des noyaux. In: Proceedings of the International Congress of Mathematicians(Cambridge, MA, 1950), vol. 1, pp. 220–230, American Mathematical Society, Providence, RI (1952)
Schwartz, L.: Théorie des distributions. Hermann, Paris (1978)
Simon, B.: The Christoffel–Darboux Kernel. In: D. Mitrea, M. Mitrea (eds.) Perspectives in Partial Differential Equations, Harmonic Analysis and Applications, Proceedings of Symposia in Pure Mathematics, vol. 79, pp. 295–346 (2008)
Sinap, A., Van Assche, W.: Polynomial interpolation and Gaussian quadrature for matrix-valued functions. Linear Algebra Appl. 207, 71–114 (1994)
Sinap, A., Van Assche, W.: Orthogonal matrix polynomials and applications. In: Proceedings of the Sixth International Congress on Computational and Applied Mathematics (Leuven, 1994). Journal of Computational Applied Mathematics 66, 27–52 (1996)
Szegő, G.: Orthogonal Polynomials. American Mathematical Society Colloquium Publication Series, vol. 23, 4th edn. American Mathematical Society, Providence, RI (1975)
Zhang, F. (ed.): The Schur Complement and Its Applications. Springer, New York (2005)
Zhedanov, A.: Rational spectral transformations and orthogonal polynomials. J. Comput. Appl. Math. 85, 67–86 (1997)
Acknowledgements
We thank the referees for their comments and suggestions, which have contributed to improving the presentation of the manuscript.
Author information
Authors and Affiliations
Corresponding author
Additional information
Communicated by Ari Laptev.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Gerardo Ariznabarreta and Manuel Mañas acknowledge financial support from the Spanish “Ministerio de Economía y Competitividad” research project [MTM2015-65888-C4-3-P], Ortogonalidad, teoría de la aproximación y aplicaciones en física matemática. Gerardo Ariznabarreta acknowledges financial support from the Universidad Complutense de Madrid Program “Ayudas para Becas y Contratos Complutenses Predoctorales en España 2011”. Juan C. García-Ardila and Francisco Marcellán acknowledge financial support from the Spanish “Ministerio de Economía y Competitividad” research project [MTM2015-65888-C4-2-P], Ortogonalidad, teoría de la aproximación y aplicaciones en física matemática.
Appendix A. Schur complements and quasideterminants
We first notice that the Schur complement was not introduced by Issai Schur but by Emilie Haynsworth in 1968 in [43, 44]. Haynsworth coined that name because of the Schur determinant formula given in what is today known as the Schur lemma [65]. For an ample overview of the Schur complement and many of its applications, see [72]. The simplest examples of quasi-determinants are Schur complements. Gel'fand and collaborators have made many essential contributions to the subject; see [29] for an excellent survey. Peter Olver, in a paper on multivariate interpolation [58], discusses an alternative interesting approach to the subject. In the late 1920s Richardson [61, 62] and Heyting [47] studied possible extensions of the determinant notion to division rings. Heyting defined the designant of a matrix with noncommutative entries, which for \(2\times 2\) matrices was the Schur complement, and generalized it to larger dimensions by induction. Let us stress that both Richardson's and Heyting's quasi-determinants were generically rational functions of the matrix coefficients. In 1931, Ore [59] gave a polynomial proposal, Ore's determinant. A definitive impulse to the modern theory was given by Gel'fand's school [22, 23, 30,31,32,33]. Quasi-determinants were defined over free division rings, and it was noticed early on that a quasi-determinant is not an analog of the commutative determinant but rather of a ratio of determinants. An essential aspect of quasi-determinants is the heredity principle: quasi-determinants of quasi-determinants are quasi-determinants; there is no analog of such a principle for determinants. Many of the properties of determinants extend to this case; see the cited papers and also [49] for quasi-minor expansions. Already in the early 1990s the Gel'fand school [31] noticed the role of quasi-determinants in some integrable systems; see also [60] for some recent work in this direction regarding the non-Abelian Toda and Painlevé II equations.
Jon Nimmo and his collaborators, the Glasgow school, have studied the relation between quasi-determinants and integrable systems; in particular we can mention the papers [36,37,38, 50]. All this paved the way, via the connection with orthogonal polynomials à la Cholesky, for the appearance of quasi-determinants in the multivariate orthogonality context. Later, in 2006, Olver [58] applied quasi-determinants to multivariate interpolation.
1.1 A.1 Schur complements
Given \(M= \left( {\begin{matrix} A &{} \quad B\\ C &{} \quad D \end{matrix}}\right) \) in block form the Schur complement with respect to A (if \(\det A\ne 0\)) is
The Schur complement with respect to D (if \(\det D\ne 0\)) is
Observe that we have the block Gauss factorization
which implies the Schur determinant formula \(\det M=\det (A)\det (M/ A)\). This is in fact the Schur lemma in disguised form; indeed, the Schur lemma in [65] assumes that \([A,C]=0\), so that \(\det M=\det (AD-BC)\). In terms of the Schur complements we have the following well known expressions for the inverse matrices
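These identities are easy to check numerically. A small sketch with a random \(5\times 5\) matrix (the block sizes are an illustrative choice) verifies the Schur determinant formula and that the lower-right block of \(M^{-1}\) is \((M/A)^{-1}\):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))          # generic, so the needed blocks invert
A, B = M[:2, :2], M[:2, 2:]
C, D = M[2:, :2], M[2:, 2:]

schur_A = D - C @ np.linalg.inv(A) @ B   # the Schur complement M / A
Minv = np.linalg.inv(M)
```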
1.2 A.2 Quasi-determinants and the heredity principle
Given any partitioned matrix where \(A_{i,j}\in {\mathbb {R}}^{m_i\times m_j}\) for \(i,j\in \{1,\dots ,k-1\}\), and \(A_{k,k}\in {\mathbb {R}}^{\kappa _1\times \kappa _2}\), \(A_{i,k}\in {\mathbb {R}}^{m_i\times \kappa _2}\) and \(A_{k,j}\in {\mathbb {R}}^{\kappa _1\times m_j}\), we are going to define its quasi-determinant à la Olver recursively. We start with \(k=2\); in this case the first quasi-determinant is different from that of the Gel'fand school. There is another quasi-determinant, the other Schur complement, for which we need \(A_{2,2}\) to be an invertible square matrix. Other quasi-determinants can be considered for regular square blocks.
Following [58] we remark that quasi-determinantal reduction is a commutative operation. This is the heredity principle formulated by Gel'fand and Retakh [29, 33]: quasi-determinants of quasi-determinants are quasi-determinants. Let us illustrate this by reproducing a nice example discussed in [58]. We consider the matrix and take the quasi-determinant with respect to the first diagonal block, which we define as the Schur complement indicated by the non-dashed lines, to get a matrix whose block subindices involve 2 and 3 but not 1. Notice also that, as we are allowed to take blocks of different sizes, we have taken the quasi-determinant with respect to a bigger block, composed of two rows and columns of basic blocks. This is Olver's generalization of the construction of Gel'fand et al. Now, we take the quasi-determinant given by the Schur complement indicated by the dashed lines, to get
We are ready to compute, for the very same matrix
the quasi-determinant associated with the first two diagonal blocks, which we label as \(\{1, 2\}\); i.e., the Schur complement indicated by the non-dashed lines in (56), to get
But recalling (53)
we get
which is identical to (54), so that
Given any set \(I=\{i_1,\dots ,i_m\}\subset \{1,\dots ,k\}\) the heredity principle allows us to define the quasi-determinant (see footnote 2)
and the \(\ell \)-th quasi-determinant is
The last quasi-determinant is denoted by
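The heredity principle itself can be tested numerically: eliminating the leading block twice in succession agrees with eliminating the corresponding bigger block at once (the matrix size and block sizes below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))          # think of M as a 3x3 grid of 2x2 blocks

def schur(M, k):
    """Schur complement M/A of the leading k x k corner A of M."""
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    return D - C @ np.linalg.inv(A) @ B

# Heredity: eliminate block 1, then block 2 of the result, versus eliminating
# the bigger block {1, 2} at once.
step_by_step = schur(schur(M, 2), 2)
at_once = schur(M, 4)
```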
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Ariznabarreta, G., García-Ardila, J.C., Mañas, M. et al. Matrix biorthogonal polynomials on the real line: Geronimus transformations. Bull. Math. Sci. (2018). https://doi.org/10.1007/s13373-018-0128-y
Received:
Revised:
Accepted:
Published:
DOI: https://doi.org/10.1007/s13373-018-0128-y
Keywords
- Matrix biorthogonal polynomials
- Spectral theory of matrix polynomials
- Quasidefinite matrix of generalized kernels
- Nondegenerate continuous sesquilinear forms
- Gauss–Borel factorization
- Matrix Geronimus transformations
- Matrix linear spectral transformations
- Christoffel type formulas
- Quasideterminants
- Spectral jets
- Unimodular matrix polynomials