1 Introduction

Given a vector of measures (μ0, μ1,⋯, μm) such that μk is supported on a set Ek, k = 0, 1,⋯, m, of the real line, consider the Sobolev inner product

$$\langle f, g\rangle_{S}=\sum\limits_{k=0}^{m}{\int}_{E_{k}} f^{(k)}(x) g^{(k)}(x) d\mu_{k}(x).$$

Several examples of sequences of orthogonal polynomials with respect to the above inner products have been studied in the literature (see [23] as a recent survey):

  1. When Ek, k = 0, 1,⋯, m, are infinite subsets of the real line (continuous Sobolev case).

  2. When E0 is an infinite subset of the real line and Ek, k = 1,⋯, m, are finite subsets (Sobolev-type case).

  3. When Em is an infinite subset of the real line and Ek, k = 0,⋯, m − 1, are finite subsets.

In the above cases, the three-term recurrence relation that every sequence of polynomials orthogonal with respect to a measure supported on an infinite subset of the real line satisfies no longer holds. This is a direct consequence of the fact that the multiplication operator by x is not symmetric with respect to any of the above inner products.

In the Sobolev-type case, one gets a multiplication operator by a polynomial intimately related to the support of the discrete measures. In [17], an illustrative example when dμ0 = \(x^{\alpha }e^{-x}dx\), α > − 1, \(x\in \lbrack 0,+\infty )\), and dμk(x) = Mkδ(x), Mk ≥ 0, k = 1, 2,⋯, m, has been studied. In general, there exists a symmetric multiplication operator for a general Sobolev inner product if and only if the measures μ1,⋯,μm are discrete (see [10]). On the other hand, in [8] the general inner products such that the multiplication operator by a polynomial is symmetric with respect to them have been studied. The representation of such inner products is given there, as well as the associated inner product. Assuming some extra conditions, one gets a Sobolev-type inner product. Notice that there is an intimate relation between these facts and the higher-order recurrence relations that the sequences of polynomials orthonormal with respect to the above general inner products satisfy. A connection with matrix orthogonal polynomials has been stated in [9].

When we deal with the Sobolev-type inner product, many contributions have emphasized the algebraic properties of the corresponding sequences of orthogonal polynomials in terms of the polynomials orthogonal with respect to the measure μ0. The case m = 1 has been studied in [1], where representation formulas for the new family as well as the distribution of its zeros have been analyzed. The particular case of Laguerre Sobolev-type orthogonal polynomials has been introduced and deeply analyzed in [19]. Outer ratio asymptotics when the measure belongs to the Nevai class, and some extensions to a more general framework of Sobolev-type inner products, have been analyzed in [22] and [20]. For measures supported on unbounded intervals, asymptotic properties of Sobolev-type orthogonal polynomials have been studied for Laguerre measures (see [14, 24]) and, in a more general framework, in [21].

The aim of our contribution is to analyze the higher-order recurrence relation that a sequence of Sobolev-type orthonormal polynomials satisfies when one considers dμ0 = dμ + Mδ(x − c) and dμ1 = Nδ(x − c), where M, N are nonnegative real numbers. In a first step, we obtain connection formulas between such Sobolev-type orthonormal polynomials and the standard ones associated with the measures dμ and \((x-c)^{2}d\mu \), respectively. A matrix analysis of the five diagonal symmetric matrix associated with such a higher-order recurrence relation is presented, taking into account the QR factorization of the shifted symmetric Jacobi matrix associated with the orthonormal polynomials with respect to the measure dμ. The shifted Jacobi matrix associated with \((x-c)^{2}d\mu \) is RQ (see [4, 16]). Our approach is quite different and it is based on the iteration of the Cholesky factorization of the symmetric Jacobi matrices associated with dμ and \((x-c)d\mu \), respectively (see [2, 12]).

These polynomial perturbations of measures are known in the literature as Christoffel perturbations (see [13] and [29]). They constitute examples of linear spectral transformations. The set of linear spectral transformations is generated by Christoffel and Geronimus transformations (see [29]). The connection with matrix analysis appears in [6] and [7] in terms of an inverse problem for bilinear forms. On the other hand, Christoffel transformations of the above type are related to Gaussian quadrature rules, as studied in [12]. For a more general framework about perturbations of bilinear forms and Hessenberg matrices as representations of a polynomial multiplication operator in terms of sequences of orthonormal polynomials associated with such bilinear forms, see [3].

The structure of the manuscript is as follows. Section 2 contains the basic background about polynomial sequences orthogonal with respect to a measure supported on an infinite set of the real line. We will call them standard orthogonal polynomial sequences. In Section 3, we deduce the coefficients of the three-term recurrence relation satisfied by the orthonormal polynomials associated with the measure \((x-c)^{2}d\mu \). In Section 4, we present several connection formulas between the sequences of standard orthonormal polynomials associated with the measures dμ and \((x-c)^{2}d\mu \) and the orthonormal polynomials with respect to a Sobolev-type inner product; we give alternative proofs to those presented in [15]. In Section 5, we study the five-term recurrence relation that orthonormal polynomials with respect to the Sobolev-type inner product satisfy. Section 6 deals with the connection between the shifted Jacobi matrices associated with the measures dμ and \((x-c)^{2}d\mu \) in terms of QR factorizations. Then, taking into account the Cholesky factorization of the symmetric five diagonal matrix associated with the multiplication operator by \((x-c)^{2}\) in terms of the Sobolev-type orthonormal polynomials, and commuting the factors, we get the square of the shifted Jacobi matrix associated with the measure \((x-c)^{2}d\mu \). Finally, in Section 7, we show an illustrative example in the framework of Laguerre-Sobolev-type inner products when c = − 1. Notice that in the literature the authors have focused their interest on the case c = 0 and on the analysis of the corresponding differential operator having the above polynomials as eigenfunctions (see [18, 25] and [26]).

2 Preliminaries

Let μ be a finite and positive Borel measure supported on an infinite subset E of the real line such that all the integrals

$$\mu_{n}={\int}_{E}x^{n}d\mu (x),$$

exist for n = 0, 1, 2,…. μn is said to be the moment of order n of the measure μ. The measure μ is said to be absolutely continuous with respect to the Lebesgue measure if there exists a non-negative function ω(x) such that dμ(x) = ω(x)dx.

In the sequel, let \(\mathbb {P}\) denote the linear space of polynomials in one real variable with real coefficients, and let {Pn(x)}n≥ 0 be the sequence of polynomials in \(\mathbb {P}\) with leading coefficient equal to one (monic OPS, or MOPS in short), orthogonal with respect to the inner product \(\langle \cdot ,\cdot \rangle _{\mu }:\mathbb {P}\times \mathbb {P}\rightarrow \mathbb {R}\) associated with μ

$$\langle f,g\rangle_{\mu }={\int}_{E}f(x)g(x)d\mu (x).$$
(1)

It induces the norm \(||f||_{\mu }^{2}=\langle f,f\rangle _{\mu }\). Under these considerations, these polynomials satisfy the following three-term recurrence relation (TTRR, in short)

$$xP_{n}(x)=P_{n+1}(x)+\beta_{n}P_{n}(x)+\gamma_{n}P_{n-1}(x),\quad n\geq 0,$$
(2)

where, for every n ≥ 1, γn is a positive real number and, for every n ≥ 0, βn is a real number.

The n-th reproducing kernel associated with μ is

$$K_{n}(x,y)=\sum\limits_{k=0}^{n}\frac{P_{k}(x)P_{k}(y)}{||P_{k}||_{\mu }^{2}},\quad n\geq 0.$$
(3)

Because of the Christoffel-Darboux formula, see [5], it may also be expressed as

$$K_{n}(x,y)=\frac{1}{||P_{n}||_{\mu}^{2}}\frac{ P_{n+1}(x)P_{n}(y)-P_{n}(x)P_{n+1}(y)}{x-y},\quad n\geq 0.$$
(4)

The confluent formula becomes

$$K_{n}(x,x)=\sum\limits_{k=0}^{n}\frac{[P_{k}(x)]^{2}}{||P_{k}||_{\mu }^{2}}=\frac{P_{n+1}^{\prime}(x)P_{n}(x)-P_{n}^{\prime }(x)P_{n+1}(x)}{||P_{n}||_{\mu}^{2}},\quad n\geq 0.$$
(5)
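The kernel identities above are easy to check numerically once the recurrence coefficients and norms of a concrete measure are available. The following minimal sketch, written in Python with NumPy, compares the definition (3) with the Christoffel-Darboux form (4) for the monic Laguerre polynomials with α = 0 (recurrence coefficients βn = 2n + 1, γn = n², squared norms (n!)²), the case used later in Section 7; the function names and the choice of test points are ours and serve only as an illustration.

```python
import numpy as np
from math import factorial

def monic_laguerre(n_max, x):
    """Monic Laguerre polynomials (alpha = 0) P_0,...,P_{n_max} at x, via the
    three-term recurrence (2) with beta_n = 2n + 1 and gamma_n = n^2."""
    P = np.zeros(n_max + 1)
    P[0] = 1.0
    if n_max >= 1:
        P[1] = x - 1.0
    for n in range(1, n_max):
        P[n + 1] = (x - (2*n + 1))*P[n] - n**2*P[n - 1]
    return P

def kernel_sum(n, x, y):
    """K_n(x, y) from the definition (3), with ||P_k||^2 = (k!)^2."""
    Px, Py = monic_laguerre(n, x), monic_laguerre(n, y)
    norms2 = np.array([float(factorial(k))**2 for k in range(n + 1)])
    return float(np.sum(Px*Py/norms2))

def kernel_cd(n, x, y):
    """K_n(x, y) from the Christoffel-Darboux formula (4)."""
    Px, Py = monic_laguerre(n + 1, x), monic_laguerre(n + 1, y)
    return (Px[n + 1]*Py[n] - Px[n]*Py[n + 1])/(float(factorial(n))**2*(x - y))

n, x, y = 6, 0.3, -1.0
print(kernel_sum(n, x, y), kernel_cd(n, x, y))   # both values agree
```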

We introduce the following usual notation for the partial derivatives of the n th reproducing kernel Kn(xy)

$$\frac{\partial^{j+k}K_n(x,\;y)}{\partial x^j\partial y^k}=K_n^{(j,k)}(x,\;y),\quad0\leq j,\;k\leq n.$$

We will use the expression of the first y-derivative of (3) evaluated at y = c

$$K_{n}^{(0,\;1)}(x,c)=\frac{1}{||P_{n}||_{\mu }^{2}}\times$$
$$\left[\frac{P_{n+1}(x)P_{n}(c)-P_{n}(x)P_{n+1}(c)}{(x-c)^{2}}+\frac{P_{n+1}(x)P_{n}^{\prime}(c)-P_{n}(x)P_{n+1}^{\prime}(c)}{x-c}\right], \quad n\geq 0,$$
(6)

and the following confluent formulas

$$K_{n}^{(0,\;1)}(c,c)=K_{n}^{(1,\;0)}(c,c)=\frac{1}{||P_{n}||_{\mu}^{2}}\left[\frac{P_{n}(c)P_{n+1}^{\prime\prime}(c)-P_{n+1}(c)P_{n}^{\prime\prime}(c)}{2} \right] ,\quad n\geq 0,$$
(7)
$$K_{n-1}^{(1,\;1)}(c,c)=\frac{1}{||P_{n}||_{\mu }^{2}}\times$$
$$\left[ \frac{P_{n}(c)P_{n+1}^{\prime \prime \prime }(c)-P_{n+1}(c)P_{n}^{\prime\prime\prime}(c)}{6}+\frac{P_{n}^{\prime}(c)P_{n+1}^{\prime\prime}(c)-P_{n+1}^{\prime}(c)P_{n}^{\prime\prime}(c) }{2}\right] ,\quad n\geq 0,$$
(8)

whose proof can be found in [15, Sec. 2.1.2].

We will denote by {pn(x)}n≥ 0 the orthonormal polynomial sequence with respect to the measure μ. Obviously,

$$p_{n}(x)=\frac{P_{n}(x)}{||P_{n}||_{\mu}}=r_{n}x^{n}+\text{lower degree terms}.$$

Notice that

$$r_{n}=\frac{1}{||P_{n}||_{\mu}}.$$

Using orthonormal polynomials, the Christoffel-Darboux formula (4) reads

$$K_{n}(x,y)=\sum\limits_{k=0}^{n}p_{k}(x)p_{k}(y)=\frac{r_{n}}{r_{n+1}}\frac{p_{n+1}(x)p_{n}(y)-p_{n}(x)p_{n+1}(y)}{x-y}$$
(9)

and its confluent form is

$$K_{n}(x,x)=\sum\limits_{k=0}^{n}[p_{k}(x)]^{2}=\frac{r_{n}}{r_{n+1}}\left(p_{n+1}^{\prime}(x)p_{n}(x)-p_{n}^{\prime}(x)p_{n+1}(x)\right) .$$

Next, we define the Christoffel canonical transformation of a measure μ (see [2, 28] and [29]). Let μ be a positive Borel measure supported on an infinite subset \(E\subseteq \mathbb {R}\), and assume that c does not belong to the interior of the convex hull of E. Here and in the sequel, \(\{P_{n}^{[k]}(x)\}_{n\geq 0}\) will denote the MOPS with respect to the inner product

$$\langle f,g\rangle_{\lbrack k]}={\int}_{E}f(x)g(x)d\mu^{\lbrack k]},\quad d\mu^{\lbrack k]}=(x-c)^{k}d\mu ,\quad k\geq 0.$$
(10)

\(\{P_{n}^{[k]}(x)\}_{n\geq 0}\) is said to be the k-iterated Christoffel MOPS with respect to the above standard inner product. If k = 1, we have the Christoffel canonical perturbation of μ. It is well known that, in such a case, Pn(c)≠ 0, and (see [5, (7.3)])

$$P_{n}^{[1]}(x)=\frac{1}{(x-c)}\left[ P_{n+1}(x)-\frac{P_{n+1}(c)}{P_{n}(c)} P_{n}(x)\right] =\frac{\| P_{n}\|_{\mu }^{2}}{P_{n}(c)}K_{n}(x,c),$$

are the monic polynomials orthogonal with respect to the modified measure dμ[1]. They are known in the literature as monic kernel polynomials. If k > 1, then we have the k-iterated Christoffel transformation of dμ. In the sequel, we will denote

$$||P_{n}^{[k]}||_{[k]}^{2}={\int}_{E}[P_{n}^{[k]}(x)]^{2}(x-c)^{k}d\mu,$$

and \(x_{n,r}^{[k]},\) r = 1, 2,..., n, will denote the zeros of \(P_{n}^{[k]}(x)\) arranged in increasing order. Since \(P_{n}^{[2]}(x)\) are the polynomials orthogonal with respect to (10) when k = 2, we have

$$(x-c)^{2}P_{n}^{[2]}(x)=\frac{\begin{vmatrix} P_{n+2}(x) & P_{n+1}(x) & P_{n}(x) \\ P_{n+2}(c) & P_{n+1}(c) & P_{n}(c) \\ P_{n+2}^{\prime}(c) & P_{n+1}^{\prime}(c) & P_{n}^{\prime }(c) \end{vmatrix} }{ \begin{vmatrix} P_{n+1}(c) & P_{n}(c) \\ P_{n+1}^{\prime }(c) & P_{n}^{\prime }(c) \end{vmatrix}}.$$
(11)

Notice that from (5) we have

$$K_{n}(c,c)=\sum\limits_{k=0}^{n}\frac{[P_{k}(c)]^{2}}{||P_{k}||_{\mu}^{2}}=\frac{P_{n+1}^{\prime}(c)P_{n}(c)-P_{n}^{\prime }(c)P_{n+1}(c)}{||P_{n}||_{\mu}^{2}}>0,$$

and hence the denominator in (11) is nonzero for every n ≥ 0. Thus, from (11), we get

$$(x-c)^{2}P_{n}^{[2]}(x)=P_{n+2}(x)-d_{n}P_{n+1}(x)+e_{n}P_{n}(x),$$
(12)

where

$$\begin{array}{@{}rcl@{}} d_{n} &=&\frac{P_{n+2}(c)P_{n}^{\prime}(c)-P_{n+2}^{\prime}(c)P_{n}(c)}{ P_{n+1}(c)P_{n}^{\prime}(c)-P_{n+1}^{\prime }(c)P_{n}(c)}, \\ e_{n} &=&\frac{P_{n+2}(c)P_{n+1}^{\prime}(c)-P_{n+2}^{\prime}(c)P_{n+1}(c) }{P_{n+1}(c)P_{n}^{\prime}(c)-P_{n+1}^{\prime}(c)P_{n}(c)} \\ &=&\frac{||P_{n+1}||_{\mu }^{2}}{||P_{n}||_{\mu }^{2}}\frac{K_{n+1}(c,\;c)}{ K_{n}(c,\;c)}=\frac{{r_{n}^{2}}}{r_{n+1}^{2}}\frac{K_{n+1}(c,\;c)}{K_{n}(c,\;c)}>0. \end{array}$$
(13)

Similar determinantal formulas can be obtained for k > 2. For orthonormal polynomials, the above expression reads

$$(x-c)^{2}\frac{p_{n}^{[2]}(x)}{r_{n}^{[2]}}=\frac{p_{n+2}(x)}{r_{n+2}}-d_{n} \frac{p_{n+1}(x)}{r_{n+1}}+e_{n}\frac{p_{n}(x)}{r_{n}}$$

or, equivalently,

$$(x-c)^{2}p_{n}^{[2]}(x)=\frac{r_{n}^{[2]}}{r_{n+2}}p_{n+2}(x)-d_{n}\frac{ r_{n}^{[2]}}{r_{n+1}}p_{n+1}(x)+e_{n}\frac{r_{n}^{[2]}}{r_{n}}p_{n}(x).$$
(14)

Furthermore, from [27, Theorem 2.5], we conclude that

$$||P_{n}^{[2]}||_{[2]}^{2}=-\frac{P_{n+1}^{[1]}(c)}{P_{n}^{[1]}(c)} ||P_{n}^{[1]}||_{[1]}^{2}=\frac{P_{n+1}^{[1]}(c)}{P_{n}^{[1]}(c)}\frac{ P_{n+1}(c)}{P_{n}(c)}||P_{n}||_{\mu }^{2} .$$

On the other hand, taking (12) into account

$$e_{n}=\frac{{\int}_{E}(x-c)^{2}P_{n}^{[2]}(x)P_{n}(x)d\mu }{ {\int}_{E}{P_{n}^{2}}(x)d\mu}=\frac{||P_{n}^{[2]}||_{[2]}^{2}}{||P_{n}||_{\mu}^{2}}=\frac{||P_{n+1}||_{\mu}^{2}}{||P_{n}||_{\mu}^{2}}\frac{K_{n+1}(c,c)}{K_{n}(c,c)}$$

which implies that

$$r_{n}^{[2]}=r_{n+1}\left(\frac{K_{n}(c,c)}{K_{n+1}(c,c)}\right)^{1/2}.$$
(15)

Replacing (15) in (14), the orthonormal version of the connection formula (12) reads

$$(x-c)^{2}p_{n}^{[2]}(x)=$$
$$\begin{array}{@{}rcl@{}} &&\left(\frac{K_{n}(c,\;c)}{K_{n+1}(c,\;c)}\right)^{1/2}\times \left(\frac{ r_{n+1}}{r_{n+2}}p_{n+2}(x)-d_{n}p_{n+1}(x)+e_{n}\frac{r_{n+1}}{r_{n}} p_{n}(x)\right) \\ &=&\left(\frac{K_{n}(c,\;c)}{K_{n+1}(c,\;c)}\right)^{1/2}\times \left(\frac{ ||P_{n+2}||_{\mu }}{||P_{n+1}||_{\mu}}p_{n+2}(x)-d_{n}p_{n+1}(x)+e_{n}\frac{ ||P_{n}||_{\mu}}{||P_{n+1}||_{\mu}}p_{n}(x)\right) . \end{array}$$

In this paper, we will focus our attention on the following Sobolev-type inner product

$$\langle f,g\rangle_{S}={\int}_{E}f(x)g(x)d\mu +Mf(c)g(c)+Nf^{\prime}(c)g^{\prime}(c),\quad f, g\in \mathbb{P},$$
(16)

where μ is a positive Borel measure supported on \(E=[a,b]\subseteq \mathbb {R}\), c ∉ E, and M, N ≥ 0. In general, E can be a bounded or unbounded interval of the real line. Let \(\{S_{n}^{M,\;N}(x)\}_{n \geq 0}\) denote the monic orthogonal polynomial sequence (MOPS in short) with respect to (16). These polynomials are known in the literature as Sobolev-type or discrete Sobolev orthogonal polynomials. It is worth pointing out that many properties of the standard orthogonal polynomials are lost when an inner product such as (16) is considered.
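For concrete computations, the inner product (16) can be evaluated exactly on polynomials by combining a Gaussian quadrature rule for μ with the two point-mass terms at c. The following minimal sketch assumes the Laguerre weight e−x on (0, +∞) and the parameters c = −1, M = N = 1 of the example in Section 7, and relies on NumPy's Gauss-Laguerre rule; the function name and the quadrature degree are ours.

```python
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial.laguerre import laggauss

def sobolev_inner(f, g, c=-1.0, M=1.0, N=1.0, quad_deg=40):
    """Sobolev-type inner product (16) for d(mu) = e^{-x} dx on (0, +inf).
    f and g are numpy Polynomial objects; Gauss-Laguerre quadrature with
    quad_deg nodes is exact as long as deg(f) + deg(g) <= 2*quad_deg - 1."""
    nodes, weights = laggauss(quad_deg)
    integral = np.sum(weights*f(nodes)*g(nodes))
    return integral + M*f(c)*g(c) + N*f.deriv()(c)*g.deriv()(c)

# usage: <x, x^2>_S for c = -1, M = N = 1
x = Polynomial([0.0, 1.0])
print(sobolev_inner(x, x*x))
```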

3 The TTRR for the 2-iterated orthogonal polynomials

In order to obtain the corresponding symmetric Jacobi matrix, in this section we will find the coefficients of the three-term recurrence relation satisfied by the 2-iterated orthonormal polynomials \(\{p_{n}^{[2]}(x)\}_{n\geq 0}\). First, we deal with the monic orthogonal polynomials \(\{P_{n}^{[2]}(x)\}_{n\geq 0}\). Taking into account that it is a standard sequence, we have

$$x P_{n}^{[2]}(x)=P_{n+1}^{[2]}(x)+\kappa_{n}P_{n}^{[2]}(x)+\tau_{n}P_{n-1}^{[2]}(x),\quad n\geq 0,$$

where

$$\kappa_{n}=\frac{\langle x P_{n}^{[2]}(x),P_{n}^{[2]}(x)\rangle_{\lbrack 2]}}{\langle P_{n}^{[2]}(x),P_{n}^{[2]}(x)\rangle_{\lbrack 2]}},\qquad \tau_{n}=\frac{\langle x P_{n}^{[2]}(x),P_{n-1}^{[2]}(x)\rangle_{\lbrack 2]}}{ \langle P_{n-1}^{[2]}(x),P_{n-1}^{[2]}(x)\rangle_{\lbrack 2]}}.$$

In order to obtain the explicit expression of the above coefficients, we first study the numerator in κn. Taking into account (10) and (12), we have

$$\begin{array}{@{}rcl@{}} \langle x P_{n}^{[2]}(x),P_{n}^{[2]}(x)\rangle_{\lbrack 2]} &=&\langle x P_{n}^{[2]}(x),(x-c)^{2}P_{n}^{[2]}(x)\rangle \\ &=&\langle x P_{n}^{[2]}(x),P_{n+2}(x)\rangle -d_{n}\langle x P_{n}^{[2]}(x),P_{n+1}(x)\rangle +e_{n}\langle P_{n}^{[2]}(x),xP_{n}(x)\rangle \\ &=&-d_{n}||P_{n+1}||_{\mu }^{2}+e_{n}\langle P_{n}^{[2]}(x),xP_{n}(x)\rangle . \end{array}$$

Next, applying (2)

$$\begin{array}{@{}rcl@{}} \langle P_{n}^{[2]}(x),xP_{n}(x)\rangle &=&\langle P_{n}^{[2]}(x),P_{n+1}(x)\rangle +\beta_{n}\langle P_{n}^{[2]}(x),P_{n}(x)\rangle +\gamma_{n}\langle P_{n}^{[2]}(x),P_{n-1}(x)\rangle \\ &=& \beta_{n}||P_{n}||_{\mu }^{2}+\gamma_{n}\langle P_{n}^{[2]}(x),P_{n-1}(x)\rangle . \end{array}$$

Taking into account (12)

$$P_{n-1}(x)=\frac{1}{e_{n-1}}(x-c)^{2}P_{n-1}^{[2]}(x)-\frac{1}{e_{n-1}} P_{n+1}(x)+\frac{d_{n-1}}{e_{n-1}}P_{n}(x)$$

we obtain

$$\begin{array}{@{}rcl@{}} \langle P_{n}^{[2]}(x),P_{n-1}(x)\rangle &=&\langle P_{n}^{[2]}(x),\frac{1}{ e_{n-1}}(x-c)^{2}P_{n-1}^{[2]}(x)-\frac{1}{e_{n-1}}P_{n+1}(x)+\frac{d_{n-1}}{ e_{n-1}}P_{n}(x)\rangle \\ &=&\frac{1}{e_{n-1}}\langle P_{n}^{[2]}(x),P_{n-1}^{[2]}(x)\rangle_{\lbrack 2]}-\frac{1}{e_{n-1}}\langle P_{n}^{[2]}(x),P_{n+1}(x)\rangle \\ &&+\frac{d_{n-1}}{e_{n-1}}\langle P_{n}^{[2]}(x),P_{n}(x)\rangle \\ &=&\frac{d_{n-1}}{e_{n-1}}||P_{n}||_{\mu }^{2}. \end{array}$$

Thus,

$$\langle x P_{n}^{[2]}(x),P_{n}^{[2]}(x)\rangle_{\lbrack 2]}=\left(\beta_{n}+\gamma_{n}\frac{d_{n-1}}{e_{n-1}}\right) e_{n}||P_{n}||_{\mu }^{2}-d_{n}||P_{n+1}||_{\mu }^{2}.$$

Next, we study the denominator in the expression of κn. From (12), we have

$$\begin{array}{@{}rcl@{}} \langle P_{n}^{[2]}(x),P_{n}^{[2]}(x)\rangle_{\lbrack 2]} &=&\langle P_{n}^{[2]}(x),(x-c)^{2}P_{n}^{[2]}(x)\rangle \\ &=&\langle P_{n}^{[2]}(x),P_{n+2}(x)\rangle -d_{n}\langle P_{n}^{[2]}(x),P_{n+1}(x)\rangle \\ &&+e_{n}\langle P_{n}^{[2]}(x),P_{n}(x)\rangle \\ &=&e_{n}||P_{n}||_{\mu}^{2}. \end{array}$$

Hence,

$$\begin{array}{@{}rcl@{}} \kappa_{n} &=&\frac{\left(\beta_{n}+\gamma_{n}\frac{d_{n-1}}{e_{n-1}} \right) e_{n}||P_{n}||_{\mu }^{2}-d_{n}||P_{n+1}||_{\mu }^{2}}{ ||P_{n}^{[2]}||_{[2]}^{2}} \\ &=&\left(\beta_{n}+\gamma_{n}\frac{d_{n-1}}{e_{n-1}}\right) e_{n}\left(\frac{r_{n}^{[2]}}{r_{n}}\right)^{2}-d_{n}\left(\frac{r_{n}^{[2]}}{r_{n+1}} \right)^{2}, \\ \tau_{n} &=& e_{n}\frac{||P_{n}||_{\mu }^{2}}{||P_{n-1}^{[2]}||_{[2]}^{2}} =\left(\frac{r_{n-1}^{[2]}}{r_{n}}\right)^{2}e_{n}>0, \end{array}$$

where

$$\begin{array}{@{}rcl@{}} d_{n} &=&\frac{r_{n+1}}{r_{n+2}}\frac{p_{n+2}(c)}{p_{n+1}(c)}+\frac{r_{n}}{ r_{n+1}}\frac{p_{n}(c)}{p_{n+1}(c)}\frac{K_{n+1}(c,\;c)}{K_{n}(c,\;c)}, \\ e_{n} &=&\frac{\Vert P_{n+1}\Vert_{\mu }^{2}}{\Vert P_{n}\Vert_{\mu }^{2}} \frac{K_{n+1}(c,\;c)}{K_{n}(c,\;c)}=\left(\frac{r_{n}}{r_{n+1}}\right)^{2} \frac{K_{n+1}(c,\;c)}{K_{n}(c,\;c)}>0. \end{array}$$

Hence, we have proved the following

Proposition 1

The monic sequence \(\{P_{n}^{[2]}(x)\}_{n\geq 0}\) satisfies the three-term recurrence relation

$$x P_{n}^{[2]}(x)=P_{n+1}^{[2]}(x)+\kappa_{n}P_{n}^{[2]}(x)+\tau_{n}P_{n-1}^{[2]}(x),\quad n\geq 0,$$

with \(P_{-1}^{[2]}(x)=0\), \(P_{0}^{[2]}(x)=1\), and

$$\begin{array}{@{}rcl@{}} \kappa_{n} &=&\left(\beta_{n}+\gamma_{n}\frac{d_{n-1}}{e_{n-1}}\right) e_{n}\left(\frac{r_{n}^{[2]}}{r_{n}}\right)^{2}-d_{n}\left(\frac{ r_{n}^{[2]}}{r_{n+1}}\right)^{2}, \\ \tau_{n} &=&\left(\frac{r_{n-1}^{[2]}}{r_{n+1}}\right)^{2}\frac{ K_{n+1}(c\;,c)}{K_{n}(c,\;c)}>0, \end{array}$$

where, taking into account the explicit expressions for dn and en given in (13), we also have

$$\begin{array}{@{}rcl@{}} d_{n} &=&\frac{r_{n+1}}{r_{n+2}}\frac{p_{n+2}(c)}{p_{n+1}(c)}+\frac{r_{n}}{ r_{n+1}}\frac{p_{n}(c)}{p_{n+1}(c)}\frac{K_{n+1}(c,\;c)}{K_{n}(c,\;c)}, \\ e_{n} &=&\left(\frac{r_{n}}{r_{n+1}}\right)^{2}\frac{K_{n+1}(c,\;c)}{ K_{n}(c,\;c)}>0. \end{array}$$

Although all the above coefficients are well known in the literature, for the sake of completeness we now present, as a corollary, the corresponding version for the three-term recurrence relation satisfied by the 2-iterated Christoffel orthonormal polynomials \(\{p_{n}^{[2]}(x)\}_{n \geq 0}\), since we will use them in Section 7 for the computation of the entries of the Jacobi matrix J[2]. Observe that the orthonormal version of Proposition 1 is

$$p_{n+1}^{[2]}(x)=\left(x-\kappa_{n}\right) \frac{p_{n}^{[2]}(x)}{ r_{n}^{[2]}/r_{n+1}^{[2]}}-\tau_{n}\frac{p_{n-1}^{[2]}(x)}{ r_{n-1}^{[2]}/r_{n+1}^{[2]}},\quad n\geq 0,$$

and, according to [13, Th. 1.29, p. 12–13],

$$p_{n+1}^{[2]}(x)=\left(x-\kappa_{n}\right) \frac{p_{n}^{[2]}(x)}{\sqrt{ \tau_{n+1}}}-\tau_{n}\frac{p_{n-1}^{[2]}(x)}{\sqrt{\tau_{n+1}\tau_{n}}} ,\quad n\geq 0,$$

so we can conclude that

$$\sqrt{\tau_{n+1}}=\frac{r_{n}^{[2]}}{r_{n+1}^{[2]}},\quad \sqrt{\tau_{n+1}\tau_{n}}=\frac{r_{n-1}^{[2]}}{r_{n+1}^{[2]}}=\frac{r_{n}^{[2]}}{ r_{n+1}^{[2]}}\frac{r_{n-1}^{[2]}}{r_{n}^{[2]}}.$$

Therefore,

$$\tau_{n}=\left(\frac{r_{n-1}^{[2]}}{r_{n+1}}\right)^{2}\frac{K_{n+1}(c,c) }{K_{n}(c,c)}=\left(\frac{r_{n-1}^{[2]}}{r_{n}^{[2]}}\right)^{2}.$$

As a consequence,

$$\begin{array}{@{}rcl@{}} \frac{K_{n+1}(c,c)}{K_{n}(c,c)} &=&\left(\frac{r_{n-1}^{[2]}}{r_{n}^{[2]}} \right)^{2}\left(\frac{r_{n+1}}{r_{n-1}^{[2]}}\right)^{2}=\left(\frac{ r_{n+1}}{r_{n}^{[2]}}\right)^{2}, \\ e_{n} &=&\left(\frac{r_{n}}{r_{n}^{[2]}}\right)^{2}>0, \\ d_{n} &=&\frac{r_{n+1}}{r_{n+2}}\frac{p_{n+2}(c)}{p_{n+1}(c)}+\left(\frac{ r_{n}}{r_{n}^{[2]}}\right)^{2}\frac{r_{n+1}}{r_{n}}\frac{p_{n}(c)}{ p_{n+1}(c)}. \end{array}$$
(17)

Replacing these alternative expressions for en and dn in κn, we have

$$\kappa_{n}=\left(\beta_{n}+\gamma_{n}\frac{d_{n-1}}{e_{n-1}}\right) e_{n}\left(\frac{r_{n}^{[2]}}{r_{n}}\right)^{2}-d_{n}\left(\frac{ r_{n}^{[2]}}{r_{n+1}}\right)^{2},$$
$$\frac{d_{n-1}}{e_{n-1}}=\frac{\frac{r_{n}}{r_{n+1}}\frac{p_{n+1}(c)}{p_{n}(c) }+\left(\frac{r_{n-1}}{r_{n-1}^{[2]}}\right)^{2}\frac{r_{n}}{r_{n-1}}\frac{ p_{n-1}(c)}{p_{n}(c)}}{\left(\frac{r_{n-1}}{r_{n-1}^{[2]}}\right)^{2}} =\left(\frac{r_{n-1}^{[2]}}{r_{n-1}}\frac{r_{n-1}^{[2]}}{r_{n-1}}\frac{r_{n} }{r_{n+1}}\frac{p_{n+1}(c)}{p_{n}(c)}+\frac{r_{n}}{r_{n-1}}\frac{p_{n-1}(c)}{p_{n}(c)}\right).$$

Therefore,

$$\begin{array}{@{}rcl@{}} \kappa_{n} &=&\beta_{n}+\gamma_{n}\left(\frac{r_{n-1}^{[2]}}{r_{n-1}} \frac{r_{n-1}^{[2]}}{r_{n-1}}\frac{r_{n}}{r_{n+1}}\frac{p_{n+1}(c)}{p_{n}(c)} +\frac{r_{n}}{r_{n-1}}\frac{p_{n-1}(c)}{p_{n}(c)}\right) \\ &&\qquad \qquad \qquad -\left(\frac{r_{n}^{[2]}}{r_{n+1}}\frac{r_{n}^{[2]}}{ r_{n+2}}\frac{p_{n+2}(c)}{p_{n+1}(c)}+\frac{r_{n}}{r_{n+1}}\frac{p_{n}(c)}{ p_{n+1}(c)}\right) . \end{array}$$

We have then proved the following

Corollary 1

The orthonormal polynomial sequence \(\{p_{n}^{[2]}(x)\}_{n\geq 0}\) satisfies the three-term recurrence relation

$$\sqrt{\tau_{n+1}}p_{n+1}^{[2]}(x)=\left(x-\kappa_{n}\right) p_{n}^{[2]}(x)-\sqrt{\tau_{n}}p_{n-1}^{[2]}(x),\quad n\geq 0,$$
(18)

with \(p_{-1}^{[2]}(x)=0\), \(p_{0}^{[2]}(x)=1/\sqrt {\tau _{0}}\), where

$$\begin{array}{@{}rcl@{}} \kappa_{n} &=&\left(\beta_{n}+\gamma_{n}\frac{d_{n-1}}{e_{n-1}}\right) e_{n}\left(\frac{r_{n}^{[2]}}{r_{n}}\right)^{2}-d_{n}\left(\frac{ r_{n}^{[2]}}{r_{n+1}}\right)^{2} \\ &=&\beta_{n}+\gamma_{n}\left(\left(\frac{r_{n-1}^{[2]}}{r_{n-1}}\right) ^{2}\frac{r_{n}}{r_{n+1}}\frac{p_{n+1}(c)}{p_{n}(c)}+\frac{r_{n}}{r_{n-1}} \frac{p_{n-1}(c)}{p_{n}(c)}\right) \\ &&\qquad \qquad \qquad -\left(\frac{r_{n}^{[2]}}{r_{n+1}}\frac{r_{n}^{[2]}}{ r_{n+2}}\frac{p_{n+2}(c)}{p_{n+1}(c)}+\frac{r_{n}}{r_{n+1}}\frac{p_{n}(c)}{ p_{n+1}(c)}\right) , \\ \tau_{n} &=&\left(\frac{r_{n-1}^{[2]}}{r_{n}^{[2]}}\right)^{2}>0, \end{array}$$

where en, and dn are given in (17).
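Corollary 1 makes the entries of the Jacobi matrix J[2] (diagonal κn, off-diagonal √τn+1) computable directly from the data of μ, namely βn, γn, the values pn(c) and the kernels Kn(c, c). The following sketch transcribes these formulas for the Laguerre case α = 0 and c = −1 treated in Section 7, so that its output can be compared with the matrix J[2] displayed there; the truncation size and all helper names are ours.

```python
import numpy as np
from math import factorial

# Entries of J_[2] from Corollary 1, for the Laguerre case alpha = 0, c = -1 of
# Section 7; a sketch, the truncation size and all names are ours.
nmax, c = 8, -1.0

beta  = np.array([2.0*n + 1 for n in range(nmax + 2)])      # monic Laguerre: beta_n = 2n+1
gamma = np.array([float(n*n) for n in range(nmax + 2)])     # gamma_n = n^2
r     = np.array([1.0/factorial(n) for n in range(nmax + 3)])   # r_n = 1/||P_n||, ||P_n||^2 = (n!)^2

# monic values P_n(c) via the TTRR (2), then normalize to p_n(c) = P_n(c) r_n
P = np.zeros(nmax + 3)
P[0], P[1] = 1.0, c - beta[0]
for n in range(1, nmax + 2):
    P[n + 1] = (c - beta[n])*P[n] - gamma[n]*P[n - 1]
p_c = P*r
K   = np.cumsum(p_c**2)                                     # K_n(c,c), cf. (5)

r2 = np.array([r[n + 1]*np.sqrt(K[n]/K[n + 1]) for n in range(nmax + 1)])   # (15)

# coefficients (17) and the recurrence coefficients of Corollary 1
e = (r[:nmax + 1]/r2)**2
d = np.array([r[n+1]/r[n+2]*p_c[n+2]/p_c[n+1]
              + (r[n]/r2[n])**2*r[n+1]/r[n]*p_c[n]/p_c[n+1] for n in range(nmax + 1)])
kappa = np.array([(beta[n] + (gamma[n]*d[n-1]/e[n-1] if n > 0 else 0.0))
                  *e[n]*(r2[n]/r[n])**2 - d[n]*(r2[n]/r[n+1])**2
                  for n in range(nmax + 1)])
tau = np.array([(r2[n-1]/r2[n])**2 for n in range(1, nmax + 1)])

J2 = np.diag(kappa) + np.diag(np.sqrt(tau), 1) + np.diag(np.sqrt(tau), -1)
print(J2[:3, :3])     # compare with the matrix J_[2] displayed in Section 7
```

The printed block should reproduce the entries 11/5, √69/5, 1501/345, … of the matrix J[2] shown in Section 7.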

4 Connection formulas for Sobolev-type and standard orthogonal polynomials

As we have seen in the previous section, the connection formulas are the main tool to study the analytical properties of new families of OPS, in terms of other families of OPS with well-known analytical properties. Indeed, the problem of finding such expressions is called the connection problem, and it is of great importance in this context.

In this section, we present some results from [15] which will be useful later, and we give alternative proofs of them. From now on, let us denote by \(\{s_{n}^{M,\;N}\}_{n\geq 0}\), {pn}n≥ 0 the sequences of polynomials orthonormal with respect to (16) and (1), respectively. We will write

$$\begin{array}{@{}rcl@{}} s_{n}^{M,\;N}(x) &=&t_{n}x^{n}+\text{lower degree terms},\quad t_{n}>0,\\ p_{n}(x) &=&r_{n}x^{n}+\text{lower degree terms},\quad r_{n}>0,\\ p_{n}^{[k]} (x) &=&r_{n}^{[k]}x^{n}+\text{lower degree terms},\quad r_{n}^{[k]}>0. \end{array}$$

In the sequel the following notation will be useful. For every \(k\in \mathbb {N}_{0}\), let us define J[k] as the semi-infinite symmetric Jacobi matrix associated with the measure \((x-c)^{k}d\mu \), verifying

$$x \boldsymbol{\bar{p}}^{[k]}=\mathbf{J}_{[k]} \boldsymbol{\bar{p}}^{[k]},$$

where \(\boldsymbol {\bar {p}}^{[k]}\) stands for the semi-infinite column vector with orthonormal polynomial entries \(\boldsymbol {\bar {p}} ^{[k]}=[p_{0}^{[k]}(x),p_{1}^{[k]}(x),p_{2}^{[k]}(x),{\ldots } ]^{\intercal }\), \(\{p_{n}^{[k]}(x)\}_{n\geq 0}\) being the orthonormal polynomial sequence with respect to the measure \((x-c)^{k}d\mu \) in (10). One has \(\boldsymbol {\bar {p}}^{[0]}=\boldsymbol {\bar {p}}=[p_{0}(x),p_{1}(x),p_{2}(x),{\ldots } ]^{\intercal }\), {pn(x)}n≥ 0 being the orthonormal polynomial sequence with respect to the standard measure μ, and J[0] = J is the corresponding Jacobi matrix.

Next, we will present an expansion of the monic polynomials \(S_{n}^{M,\;N}(x)\) in terms of polynomials Pn(x) orthogonal with respect to μ. When necessary, we refer the reader to [15, Th. 5.1] for alternative proofs to those presented here.

Lemma 1

$$S_{n}^{M,\;N}(x)=P_{n}(x)-M S_{n}^{M,\;N}(c)K_{n-1}(x,c)-N [S_{n}^{M,\;N}]^{\prime}(c)K_{n-1}^{(0,\;1)}(x,c)$$
(19)

where

$$\begin{array}{@{}rcl@{}} S_{n}^{M,N}(c) &=&\frac{\begin{vmatrix} P_{n}(c) & NK_{n-1}^{(0,\;1)}(c,\;c) \\ \lbrack P_{n}]^{\prime}(c) & 1+NK_{n-1}^{(1,\;1)}(c,\;c) \end{vmatrix} }{ \begin{vmatrix} 1+MK_{n-1}(c,\;c) & NK_{n-1}^{(0,\;1)}(c,\;c) \\ MK_{n-1}^{(1,\;0)}(c,\;c) & 1+NK_{n-1}^{(1,\;1)}(c,\;c) \end{vmatrix} }, \end{array}$$
(20)
$$\begin{array}{@{}rcl@{}} \lbrack S_{n}^{M,N}]^{\prime }(c) &=&\frac{ \begin{vmatrix} 1+MK_{n-1}(c,\;c) & P_{n}(c) \\ MK_{n-1}^{(1,\;0)}(c,\;c) & [P_{n}]^{\prime }(c) \end{vmatrix} }{ \begin{vmatrix} 1+MK_{n-1}(c,\;c) & NK_{n-1}^{(0,\;1)}(c,\;c) \\ MK_{n-1}^{(1,\;0)}(c,\;c) & 1+NK_{n-1}^{(1,\;1)}(c,\;c) \end{vmatrix} }. \end{array}$$
(21)

Proof 1

We search for the expansion

$$S_{n}^{M,\;N}(x)=P_{n}(x)+\sum\limits_{j=0}^{n-1}{\varrho}_{n,\;j}P_{j}(x),$$

where

$${\varrho}_{n,\;j}=\frac{{\int}_{E}S_{n}^{M,\;N}(x)P_{j}(x)d\mu }{||P_{j}||_{\mu }^{2}}=-\frac{M S_{n}^{M,\;N}(c)P_{j}(c)}{||P_{j}||_{\mu }^{2}}-\frac{ N [S_{n}^{M,\;N}]^{\prime}(c)[P_{j}]^{\prime}(c)}{||P_{j}||_{\mu }^{2}}.$$

From these coefficients, (19) follows. Next, evaluating (19) and its first derivative with respect to x at x = c, we get the following linear system

$$\begin{array}{@{}rcl@{}} P_{n}(c) &=&\left(1+M K_{n-1}(c,\;c)\right) S_{n}^{M,\;N}(c)+N K_{n-1}^{(0,\;1)}(c,\;c)[S_{n}^{M,\;N}]^{\prime }(c), \\ \lbrack P_{n}]^{\prime}(c) &=&M K_{n-1}^{(0,\;1)}(c,\;c)S_{n}^{M,\;N}(c)+\left(1+N K_{n-1}^{(1,\;1)}(c,\;c)\right) [S_{n}^{M,\;N}]^{\prime}(c). \end{array}$$

It is well known (see [1]) that a linear system as above has a unique solution if and only if the determinant

$$\left| \begin{array}{cc} 1+MK_{n-1}(c,\;c) & NK_{n-1}^{(0,\;1)}(c,\;c) \\ MK_{n-1}^{(1,\;0)}(c,\;c) & 1+NK_{n-1}^{(1,\;1)}(c,\;c) \end{array} \right| \neq 0$$
(22)

for each fixed \(n\in \mathbb {N}\). Indeed, it is important to point out that (22) is a necessary and sufficient condition for the existence of the Sobolev-type polynomial \(S_{n}^{M,N}(x)\) of degree n. Since, from the very beginning in (16), we restrict ourselves to the case in which M, N ≥ 0, c lies outside the interior of the convex hull of \(E\subseteq \mathbb {R}\), and {Kn(c, c)}n≥ 0, \(\{K_{n}^{(0,\;1)}(c,\;c)\}_{n\geq 0}\), and \(\{K_{n}^{(1,\;1)}(c,\;c)\}_{n\geq 0}\) are all monotone non-decreasing sequences (this comes from their definitions (5), (7) and (8)), we next check that (22) holds for each fixed \(n\in \mathbb {N}\).

That determinant can be rewritten as

$$1+MK_{n-1}(c,c)+NK_{n-1}^{(1,1)}(c,c)+MN\left[K_{n-1}(c,c)K_{n-1}^{(1,1)}(c,c)-\left( K_{n-1}^{(0,1)}(c,c)\right)^{2}\right].$$

Under the aforementioned conditions, \(MK_{n-1}(c,c)\) (respectively \(NK_{n-1}^{(1,1)}(c,c)\)) equals zero if, and only if \(M=0\) (respectively \(N=0\)). Next, concerning the term in square brackets above, it is a simple matter to observe the following Cauchy-Schwarz inequality

$$\left( K_{n-1}^{(0,1)}(c,c)\right)^{2}=\left(\sum\limits_{k=0}^{n-1}\frac{P_{k}(c)P_{k}^{\prime }(c)}{\|P_{k}\|_{\mu }^{2}}\right) ^{2}\leq \left(\sum\limits_{k=0}^{n-1}\frac{[P_{k}(c)]^{2}}{\|P_{k}\|_{\mu }^{2}}\right) \left(\sum\limits_{k=0}^{n-1} \frac{[P_{k}^{\prime }(c)]^{2}}{\|P_{k}\|_{\mu }^{2}}\right)=K_{n-1}(c,c)K_{n-1}^{(1,1)} (c,c),$$

so the term in square brackets is always greater than or equal to zero. Therefore, we can finally assert that

$$1+MK_{n-1}(c,c)+NK_{n-1}^{(1,1)}(c,c)+MN\left[K_{n-1}(c,c)K_{n-1}^{(1,1)}(c,c)-\left( K_{n-1}^{(0,1)}(c,c)\right)^{2}\right] \geq 1.$$

Hence, solving the above linear system for \(S_{n}^{M,\;N}(c)\) and \([S_{n}^{M,\;N}]^{\prime }(c)\) we obtain (20) and (21).

This completes the proof.
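In practice, the proof above reduces the computation of \(S_{n}^{M,\;N}(c)\) and \([S_{n}^{M,\;N}]^{\prime}(c)\) to the 2 × 2 linear system displayed before (22). The following sketch assembles and solves that system numerically for the Laguerre data of Section 7 (α = 0, c = −1, M = N = 1); it is only an illustration and the function name is ours.

```python
import numpy as np
from math import factorial

def sobolev_values_at_c(n, c=-1.0, M=1.0, N=1.0):
    """Solve the 2x2 linear system of Proof 1 for S_n^{M,N}(c) and [S_n^{M,N}]'(c),
    taking d(mu) = e^{-x} dx on (0, +inf) (monic Laguerre, alpha = 0, so that
    beta_k = 2k+1, gamma_k = k^2 and ||P_k||^2 = (k!)^2).  A sketch for illustration."""
    P, dP = np.zeros(n + 1), np.zeros(n + 1)
    P[0] = 1.0
    if n >= 1:
        P[1], dP[1] = c - 1.0, 1.0
    for k in range(1, n):
        P[k + 1]  = (c - (2*k + 1))*P[k] - k**2*P[k - 1]
        dP[k + 1] = P[k] + (c - (2*k + 1))*dP[k] - k**2*dP[k - 1]   # derivative of (2)
    w = np.array([1.0/factorial(k)**2 for k in range(n)])            # 1/||P_k||^2, k <= n-1
    K   = np.sum(w*P[:n]**2)          # K_{n-1}(c,c)
    K01 = np.sum(w*P[:n]*dP[:n])      # K_{n-1}^{(0,1)}(c,c)
    K11 = np.sum(w*dP[:n]**2)         # K_{n-1}^{(1,1)}(c,c)
    A = np.array([[1.0 + M*K, N*K01],
                  [M*K01,     1.0 + N*K11]])
    return np.linalg.solve(A, [P[n], dP[n]])     # [S_n(c), S_n'(c)]

print(sobolev_values_at_c(2))   # for n = 2 this returns [0.25, -2.75]
```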

From the above lemma, we can also express \(S_{n}^{M,\;N}(x)\) as follows

$$S_{n}^{M,\;N}(x)=\frac{ \begin{vmatrix} P_{n}(x) & MK_{n-1}(x,c) & NK_{n-1}^{(0,\;1)}(x,c) \\ P_{n}(c) & 1+M K_{n-1}(c,\;c) & N K_{n-1}^{(0,\;1)}(c,\;c) \\ \lbrack P_{n}]^{\prime }(c) & M K_{n-1}^{(0,\;1)}(c,\;c) & 1+N K_{n-1}^{(1,\;1)}(c,\;c) \end{vmatrix} }{ \begin{vmatrix} 1+M K_{n-1}(c,\;c) & N K_{n-1}^{(0,\;1)}(c,\;c) \\ M K_{n-1}^{(0,\;1)}(c,\;c) & 1+N K_{n-1}^{(1,\;1)}(c,\;c) \end{vmatrix} }.$$

In terms of the orthonormal polynomials, (19) becomes

$$s_{n}^{M,\;N}(x)=\frac{t_{n}}{r_{n}}p_{n}(x)-M s_{n}^{M,\;N}(c)K_{n-1}(x,c)-N [s_{n}^{M,\;N}]^{\prime}(c)K_{n-1}^{(0,\;1)}(x,c).$$
(23)

As a direct consequence of Lemma 1, we get the following result concerning the norm of the Sobolev-type polynomials \(S_{n}^{M,\;N}\)  

Lemma 2

For \(c\in \mathbb {R}\), the norm of the monic Sobolev-type polynomials \(S_{n}^{M,\;N}\), orthogonal with respect to (16), is

$$\frac{1}{{t_{n}^{2}}}=||S_{n}^{M,\;N}||_{S}^{2}=||P_{n}||_{\mu}^{2}+M S_{n}^{M,\;N}(c)P_{n}(c)+N [S_{n}^{M,\;N}]^{\prime}(c)[P_{n}]^{\prime}(c).$$

Proof 2

From (19), we have

$$S_{n}^{M,\;N}(x)=P_{n}(x)-M S_{n}^{M,\;N}(c)K_{n-1}(x,c)-N [S_{n}^{M,\;N}]^{\prime}(c)K_{n-1}^{(0,\;1)}(x,c)$$

and according to (16) we get

$$\langle S_{n}^{M,\;N}(x),S_{n}^{M,\;N}(x)\rangle_{S}=\langle S_{n}^{M,\;N}(x),P_{n}(x)\rangle +M S_{n}^{M,\;N}(c)P_{n}(c)+N [S_{n}^{M,\;N}]^{\prime}(c)[P_{n}]^{\prime}(c).$$

This completes the proof.

Next, we represent the Sobolev-type orthogonal polynomials in terms of the polynomial kernels associated with the sequence of orthonormal polynomials {pn(x)}n≥ 0 and its derivatives. Another proof of this result can be found in [15, Prop. 5.6, p. 115].

Lemma 3

The sequence of Sobolev-type orthonormal polynomials \(\{s_{n}^{M,\;N}(x)\}_{n\geq 0}\) can be expressed as

$$\begin{array}{@{}rcl@{}} s_{n}^{M,\;N}(x) &=&\alpha_{n+1,\;n}p_{n+1}(x)+\alpha_{n,\;n}p_{n}(x) \\ &&-M s_{n}^{M,\;N}(c)K_{n+1}(x,c)-N [s_{n}^{M,\;N}]^{\prime }(c)K_{n+1}^{(0,\;1)}(x,c), \end{array}$$
(24)

where

$$\begin{array}{@{}rcl@{}} \alpha_{n+1,n} &=& M s_{n}^{M,\;N}(c)p_{n+1}(c)+N [s_{n}^{M,\;N}]^{\prime}(c)[p_{n+1}]^{\prime }(c), \\ \alpha_{n,n} &=&\frac{t_{n}}{r_{n}}+M s_{n}^{M,\;N}(c)p_{n}(c)+N [s_{n}^{M,\;N}]^{\prime }(c)[p_{n}]^{\prime}(c). \end{array}$$

Proof 3

From (9)

$$\begin{array}{@{}rcl@{}} K_{n-1}(x,c) &=& K_{n+1}(x,c)-p_{n+1}(x)p_{n+1}(c)-p_{n}(x)p_{n}(c), \\ K_{n-1}^{(0,\;1)}(x,c) &=&K_{n+1}^{(0,\;1)}(x,c)-p_{n+1}(x)[p_{n+1}]^{\prime}(c)-p_{n}(x)[p_{n}]^{\prime}(c). \end{array}$$

Substituting the above relations into (23) yields

$$\begin{array}{@{}rcl@{}} s_{n}^{M,\;N}(x) &=&\frac{t_{n}}{r_{n}}p_{n}(x)-M s_{n}^{M,\;N}(c)\left(K_{n+1}(x,c)-p_{n+1}(x)p_{n+1}(c)-p_{n}(x)p_{n}(c)\right) \\ &&-N [s_{n}^{M,\;N}]^{\prime}(c)\left(K_{n+1}^{(0,\;1)}(x,c)-p_{n+1}(x)[p_{n+1}]^{\prime }(c)-p_{n}(x)[p_{n}]^{\prime}(c)\right) \\ &=&\left[ M s_{n}^{M,\;N}(c)p_{n+1}(c)+N [s_{n}^{M,\;N}]^{\prime }(c)[p_{n+1}]^{\prime }(c)\right] p_{n+1}(x) \\ &&+\left[\frac{t_{n}}{r_{n}}+M s_{n}^{M,\;N}(c)p_{n}(c)+N [s_{n}^{M,\;N}]^{ \prime }(c)[p_{n}]^{\prime }(c)\right] p_{n}(x) \\ &&-M s_{n}^{M,\;N}(c)K_{n+1}(x,c)-N [s_{n}^{M,\;N}]^{\prime }(c)K_{n+1}^{(0,\;1)}(x,c) \end{array}$$

This completes the proof.

Next, we expand the polynomials {pn(x)}n≥ 0 in terms of the polynomials \(\{p_{n}^{[2]}(x)\}_{n\geq 0}\). This result is already addressed in [15, Prop. 5.7, p.116] as well as in [11] but we include here an alternative proof.

Lemma 4

The sequence of polynomials {pn(x)}n≥ 0, orthonormal with respect to dμ, can be expressed in terms of the 2-iterated orthonormal polynomials \(\{p_{n}^{[2]}(x)\}_{n\geq 0}\) as follows

$$p_{n}(x)=\xi_{n,\;n}p_{n}^{[2]}(x)+\xi_{n-1,\;n}p_{n-1}^{[2]}(x)+\xi_{n-2,\;n}p_{n-2}^{[2]}(x),$$

where

$$\begin{array}{@{}rcl@{}} \xi_{n,\;n} &=&\frac{r_{n}}{r_{n}^{[2]}}=\frac{r_{n}}{r_{n+1}}\left(\frac{ K_{n+1}(c,\;c)}{K_{n}(c,\;c)}\right)^{1/2}=e_{n}^{1/2}, \\ \xi_{n-1,\;n} &=&-d_{n-1}\left(\frac{K_{n-1}(c,\;c)}{K_{n}(c,\;c)}\right)^{1/2},\\ \xi_{n-2,\;n} &=&\frac{r_{n-1}}{r_{n}}\left(\frac{K_{n-2}(c,\;c)}{K_{n-1}(c,\;c)} \right)^{1/2}=\frac{r_{n-2}}{r_{n}}e_{n-2}^{-1/2} . \end{array}$$

Proof 4

Taking into account (12), (15) and (14), for the first coefficient we immediately have

$$\begin{array}{@{}rcl@{}} \xi_{n,\;n} &=&\langle p_{n}(x),p_{n}^{[2]}(x)\rangle_{\lbrack 2]}=\langle p_{n}(x),(x-c)^{2}p_{n}^{[2]}(x)\rangle \\ &=&\frac{r_{n}}{r_{n}^{[2]}}=\frac{r_{n}}{r_{n+1}}\left(\frac{K_{n+1}(c,\;c)}{ K_{n}(c,\;c)}\right)^{1/2}=e_{n}^{1/2}. \end{array}$$

For the second coefficient, from (14), we have

$$\begin{array}{@{}rcl@{}} \xi_{n-1,\;n} &=&\langle p_{n}(x),p_{n-1}^{[2]}(x)\rangle_{\lbrack 2]}=\langle p_{n}(x),(x-c)^{2}p_{n-1}^{[2]}(x)\rangle \\ &=&\langle p_{n}(x),-d_{n-1}\left(\frac{K_{n-1}(c,\;c)}{K_{n}(c,\;c)}\right)^{1/2}p_{n}(x)\rangle =-d_{n-1}\left(\frac{K_{n-1}(c,\;c)}{K_{n}(c,\;c)}\right)^{1/2}. \end{array}$$

Finally, for the last coefficient, we get

$$\begin{array}{@{}rcl@{}} \xi_{n-2,\;n} &=&\langle p_{n}(x),p_{n-2}^{[2]}(x)\rangle_{\lbrack 2]}=\langle p_{n}(x),(x-c)^{2}p_{n-2}^{[2]}(x)\rangle \\ &=&\frac{r_{n-2}^{[2]}}{r_{n}}=\frac{r_{n-1}}{r_{n}}\left(\frac{K_{n-2}(c,\;c) }{K_{n-1}(c,\;c)}\right)^{1/2}=\frac{r_{n-2}}{r_{n}}e_{n-2}^{-1/2}. \end{array}$$

This completes the proof.

Next, let us obtain a third representation for the Sobolev-type OPS in terms of the polynomials orthonormal with respect to \((x-c)^{2}d\mu \). This expression will be very useful to find the connection of these polynomials with matrix orthogonal polynomials, and we include the proof for the convenience of the reader.

Theorem 1

Let \(\{s_{n}^{M,\;N}(x)\}_{n\geq 0}\) be the sequence of Sobolev-type polynomials orthonormal with respect to (16), and let \(\{p_{n}^{[2]}(x)\}_{n\geq 0}\) be the sequence of polynomials orthonormal with respect to the inner product (10) with k = 2. Then, the following expression holds

$$s_{n}^{M,\;N}(x)=\gamma_{n,\;n}p_{n}^{[2]}(x)+\gamma_{n-1,\;n}p_{n-1}^{[2]}(x)+\gamma_{n-2,\;n}p_{n-2}^{[2]}(x),$$
(25)

where,

$$\gamma_{n,\;n}=\frac{t_{n}}{r_{n}^{[2]}}=\frac{t_{n}}{r_{n+1}}\left(\frac{ K_{n+1}(c,\;c)}{K_{n}(c,\;c)}\right)^{1/2},$$
$$\gamma_{n-1,\;n}=-\left(\frac{K_{n-1}(c,\;c)}{K_{n}(c,\;c)}\right)^{1/2}$$
$$\times \left(d_{n-1}\frac{t_{n}}{r_{n}}+e_{n-1}\frac{r_{n}}{r_{n-1}}\left[M s_{n}^{M,\;N}(c)p_{n-1}(c)+N [s_{n}^{M,\;N}]^{\prime}(c)[p_{n-1}]^{\prime}(c) \right] \right) ,$$
$$\gamma_{n-2,\;n}=\frac{r_{n-1}}{t_{n}}\left(\frac{K_{n-2}(c,\;c)}{K_{n-1}(c,\;c)} \right)^{1/2}.$$

Proof 5

For γn, n, comparing the leading coefficients of \(s_{n}^{M,\;N}(x)\) and \(p_{n}^{[2]}(x)\), it follows at once that

$$\gamma_{n,n}=\langle s_{n}^{M,\;N}(x),p_{n}^{[2]}(x)\rangle_{\lbrack 2]}= \frac{t_{n}}{r_{n}^{[2]}}.$$

Next, from (15),

$$\gamma_{n,\;n}=\frac{t_{n}}{r_{n}^{[2]}}=\frac{t_{n}}{r_{n+1}}\left(\frac{ K_{n+1}(c,\;c)}{K_{n}(c,\;c)}\right)^{1/2}.$$

For γn− 1,n we need some extra work. From (14), we have

$$\begin{array}{@{}rcl@{}} \gamma_{n-1,\;n} &=&\langle s_{n}^{M,\;N}(x),p_{n-1}^{[2]}(x)\rangle_{\lbrack 2]}={\int}_{E}s_{n}^{M,\;N}(x)(x-c)^{2}p_{n-1}^{[2]}(x)d\mu \\ &=&{\int}_{E}s_{n}^{M,\;N}(x)\left[ -d_{n-1}\frac{r_{n-1}^{[2]}}{r_{n}} p_{n}(x)+e_{n-1}\frac{r_{n-1}^{[2]}}{r_{n-1}}p_{n-1}(x)\right] d\mu \\ &=&-d_{n-1}\frac{t_{n}}{r_{n}}\left(\frac{K_{n-1}(c,\;c)}{K_{n}(c,\;c)}\right)^{1/2}+e_{n-1}\frac{r_{n}}{r_{n-1}}\left(\frac{K_{n-1}(c,\;c)}{K_{n}(c,\;c)} \right)^{1/2}{\int}_{E}s_{n}^{M,\;N}(x)p_{n-1}(x)d\mu . \end{array}$$

The last integral can be computed using (23)

$${\int}_{E}s_{n}^{M,\;N}(x)p_{n-1}(x)d\mu =$$
$$\begin{array}{@{}rcl@{}} &&{\int}_{E}\left(-M s_{n}^{M,\;N}(c)K_{n-1}(x,c)-N [s_{n}^{M,\;N}]^{\prime }(c)K_{n-1}^{(0,\;1)}(x,c)\right) p_{n-1}(x)d\mu \\ &=&-M s_{n}^{M,\;N}(c)p_{n-1}(c)-N [s_{n}^{M,\;N}]^{\prime }(c)[p_{n-1}]^{\prime }(c). \end{array}$$

Thus,

$$\gamma_{n-1,\;n}=-\left(\frac{K_{n-1}(c,\;c)}{K_{n}(c,\;c)}\right)^{1/2}$$
$$\times \left(d_{n-1}\frac{t_{n}}{r_{n}}+e_{n-1}\frac{r_{n}}{r_{n-1}}\left[ M s_{n}^{M,\;N}(c)p_{n-1}(c)+N [s_{n}^{M,\;N}]^{\prime }(c)[p_{n-1}]^{\prime }(c) \right] \right) .$$

Finally, for the last coefficient, we have

$$\begin{array}{@{}rcl@{}} \gamma_{n-2,\;n} &=& \langle s_{n}^{M,\;N}(x),p_{n-2}^{[2]}(x)\rangle_{\lbrack 2]}=\langle s_{n}^{M,\;N}(x),(x-c)^{2}p_{n-2}^{[2]}(x)\rangle_{S} \\ &=&t_{n}r_{n-2}^{[2]}\langle S_{n}^{M,\;N}(x),(x-c)^{2}P_{n-2}^{[2]}(x)\rangle_{S} \\ &=&t_{n}r_{n-2}^{[2]}||S_{n}^{M,\;N}||_{S}^{2}=\frac{r_{n-2}^{[2]}}{t_{n}}= \frac{r_{n-1}}{t_{n}}\left(\frac{K_{n-2}(c,\;c)}{K_{n-1}(c,\;c)}\right)^{1/2}. \end{array}$$

This completes the proof.

5 The five-term recurrence relation

In this section, we will obtain the five-term recurrence relation that the sequence of Sobolev-type orthonormal polynomials \(\{s_{n}^{M,\;N}(x)\}_{n\geq 0}\) satisfies. We use orthonormal polynomials because all the matrices associated with the multiplication operators we are dealing with are symmetric. Later on, we will derive an interesting relation between the five diagonal matrix H associated with the multiplication operator by \((x-c)^{2}\) in terms of the orthonormal basis \(\{s_{n}^{M,\;N}(x)\}_{n\geq 0}\), and the tridiagonal Jacobi matrix J[2] associated with the three-term recurrence relation satisfied by the 2-iterated orthonormal polynomials \(\{p_{n}^{[2]}(x)\}_{n\geq 0}\).

To do that, we will use the following remarkable fact

Proposition 2

The multiplication operator by \((x-c)^{2}\) is a symmetric operator with respect to the discrete Sobolev inner product (16). In other words, for any \(p(x), q(x)\in \mathbb {P}\), it satisfies

$$\langle (x-c)^{2}p(x),q(x)\rangle_{S}=\langle p(x),(x-c)^{2}q(x)\rangle_{S}.$$
(26)

Proof 6

The proof is a straightforward consequence of (16).

Next, we will obtain the coefficients of the aforementioned five-term recurrence relation. Let us consider the Fourier expansion of \((x-c)^{2}s_{n}^{M,\;N}(x)\) in terms of \(\{s_{n}^{M,\;N}(x)\}_{n\geq 0}\)

$$(x-c)^{2}s_{n}^{M,\;N}(x)=\sum\limits_{k=0}^{n+2}\rho_{k,\;n}s_{k}^{M,\;N}(x),$$
(27)

where

$$\rho_{k,n}=\left\langle (x-c)^{2}s_{n}^{M,\;N}(x),s_{k}^{M,\;N}(x)\right\rangle_{S},\quad k=0,{\ldots} ,n+2.$$

From (26),

$$\rho_{k,n}=\left\langle s_{n}^{M,\;N}(x),(x-c)^{2}s_{k}^{M,\;N}(x)\right\rangle_{S},\quad k=0,{\ldots} ,n+2.$$

Hence, ρkn = 0 for k = 0,…, n − 3. Taking into account that

$$\lbrack (x-c)^{2}s_{n}^{M,\;N}(x)]|_{x=c}=[(x-c)^{2}s_{n}^{M,\;N}(x)]^{\prime }|_{x=c}=0,$$

and using [24, Th. 1, p. 174] we get

$$\langle (x-c)^{2}s_{n}^{M,\;N}(x),s_{k}^{M,\;N}(x)\rangle_{S}=\langle s_{n}^{M,\;N}(x),s_{k}^{M,\;N}(x)\rangle_{\lbrack 2]}.$$
(28)

Notice that

$$\langle (x-c)^{2}s_{n}^{M,\;N}(x),s_{k}^{M,\;N}(x)\rangle_{S}=\langle (x-c)^{2}s_{n}^{M,\;N}(x),s_{k}^{M,\;N}(x)\rangle .$$
(29)

Next, using the connection formula (25), we have

$$\begin{array}{@{}rcl@{}} \rho_{n+2,\;n} &=&\langle (x-c)^{2}s_{n}^{M,\;N}(x),s_{n+2}^{M,\;N}(x)\rangle_{S}=\langle s_{n}^{M,\;N}(x),s_{n+2}^{M,\;N}(x)\rangle_{\lbrack 2]} \\ &=&\gamma_{n,\;n}\gamma_{n,\;n+2}=\frac{t_{n}}{t_{n+2}}, \end{array}$$
$$\begin{array}{@{}rcl@{}} \rho_{n+1,\;n} &=&\langle (x-c)^{2}s_{n}^{M,\;N}(x),s_{n+1}^{M,\;N}(x)\rangle_{S}=\langle s_{n}^{M,\;N}(x),s_{n+1}^{M,\;N}(x)\rangle_{\lbrack 2]} \\ &=&\gamma_{n,\;n}\gamma_{n,\;n+1} \langle p_{n}^{[2]}(x),p_{n}^{[2]}(x)\rangle_{\lbrack 2]}+\gamma_{n-1,\;n}\gamma_{n-1,\;n+1} \langle p_{n-1}^{[2]}(x),p_{n-1}^{[2]}(x)\rangle_{\lbrack 2]} \\ &=&\gamma_{n,\;n}\gamma_{n,\;n+1}+\gamma_{n-1,\;n}\gamma_{n-1,\;n+1}, \end{array}$$
$$\begin{array}{@{}rcl@{}} \rho_{n,\;n} &=&\langle (x-c)^{2}s_{n}^{M,\;N}(x),s_{n}^{M,\;N}(x)\rangle_{S}=\langle s_{n}^{M,\;N}(x),s_{n}^{M,\;N}(x)\rangle_{\lbrack 2]} \\ &=&\gamma_{n,\;n}^{2} \langle p_{n}^{[2]}(x),p_{n}^{[2]}(x)\rangle_{\lbrack 2]}+\gamma_{n-1,\;n}^{2} \langle p_{n-1}^{[2]}(x),p_{n-1}^{[2]}(x)\rangle _{\lbrack 2]}+\gamma_{n-2,\;n}^{2} \langle p_{n-2}^{[2]}(x),p_{n-2}^{[2]}(x)\rangle_{\lbrack 2]} \\ &=&\gamma_{n,\;n}^{2}+\gamma_{n-1,\;n}^{2}+\gamma_{n-2,\;n}^{2}, \end{array}$$
$$\begin{array}{@{}rcl@{}} \rho_{n-1,\;n} &=&\langle (x-c)^{2}s_{n}^{M,\;N}(x),s_{n-1}^{M,\;N}(x)\rangle_{S}=\langle s_{n}^{M,\;N}(x),s_{n-1}^{M,\;N}(x)\rangle_{\lbrack 2]} \\ &=&\gamma_{n-1,\;n}\gamma_{n-1,\;n-1} \langle p_{n-1}^{[2]}(x),p_{n-1}^{[2]}(x)\rangle_{\lbrack 2]}+\gamma_{n-2,\;n}\gamma_{n-2,\;n-1} \langle p_{n-2}^{[2]}(x),p_{n-2}^{[2]}(x)\rangle_{\lbrack 2]} \\ &=&\gamma_{n-1,\;n-1}\gamma_{n-1,\;n}+\gamma_{n-2,\;n}\gamma_{n-2,\;n-1}=\rho_{n,\;n-1} , \end{array}$$
$$\begin{array}{@{}rcl@{}} \rho_{n-2,\;n} &=&\langle (x-c)^{2}s_{n}^{M,\;N}(x),s_{n-2}^{M,\;N}(x)\rangle_{S}=\langle s_{n}^{M,\;N}(x),s_{n-2}^{M,\;N}(x)\rangle_{\lbrack 2]} \\ &=&\gamma_{n-2,\;n-2}\gamma_{n-2,\;n} \langle p_{n-2}^{[2]}(x),p_{n-2}^{[2]}(x)\rangle_{\lbrack 2]} \\ &=&\gamma_{n-2,\;n-2}\gamma_{n-2,\;n}=\frac{t_{n-2}}{t_{n}}. \end{array}$$

Introducing the following notation

$$\rho_{n-2,\;n}=a_{n},\quad \rho_{n-1,\;n}=b_{n},\quad \rho_{n,\;n}=c_{n} ,$$

(27) reads as the following

Theorem 2

The sequence \(\{s_{n}^{M,\;N}(x)\}_{n\geq 0}\) satisfies a five-term recurrence relation as follows

$$(x-c)^{2}s_{n}^{M,\;N}(x)=$$
$$a_{n+2}s_{n+2}^{M,\;N}(x)+b_{n+1}s_{n+1}^{M,\;N}(x)+c_{n}s_{n}^{M,\;N}(x)+b_{n}s_{n-1}^{M,\;N}(x)+a_{n}s_{n-2}^{M,\;N}(x),\quad n\geq 0,$$
(30)

where, by convention,

$$s_{-2}^{M,\;N}(x)=s_{-1}^{M,\;N}(x)=0.$$
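The recurrence (30) is encoded by the five diagonal matrix H introduced in the next section. As a small illustration, the following sketch assembles a finite section of H from the coefficient sequences an, bn, cn, which are assumed to be already available (for instance from the ρ formulas above); the first entries of the example worked out in Section 7 are used as test data.

```python
import numpy as np

def five_diagonal_section(a, b, c_diag):
    """Leading m x m section of the symmetric five diagonal matrix H of (32), built
    from the five-term recurrence coefficients (30): a[n] = rho_{n-2,n},
    b[n] = rho_{n-1,n}, c_diag[n] = rho_{n,n}.  The coefficient arrays are assumed given."""
    m = len(c_diag)
    H = np.diag(np.asarray(c_diag, dtype=float))
    off1 = np.asarray(b[1:m], dtype=float)
    off2 = np.asarray(a[2:m], dtype=float)
    H += np.diag(off1, 1) + np.diag(off1, -1)
    H += np.diag(off2, 2) + np.diag(off2, -2)
    return H

# usage with the first coefficients of the example in Section 7 (alpha = 0, c = -1, M = N = 1)
print(five_diagonal_section(a=[0.0, 0.0, 0.5*np.sqrt(89/2)],
                            b=[0.0, 11/(2*np.sqrt(2)), 129/np.sqrt(89)],
                            c_diag=[2.5, 9.5, 5331/178]))
```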

6 A matrix approach

In this section, we will deduce an interesting relation between the five diagonal matrix H, associated with the multiplication operator by \((x-c)^{2}\) acting on the Sobolev-type orthonormal polynomials, and the Jacobi matrix J[2] associated with the 2-iterated orthonormal polynomials \(\{p_{n}^{[2]}(x)\}_{n\geq 0}\).

First, we deal with the matrix representation of (30)

$$(x-c)^{2}\boldsymbol{\bar{s}}^{M,\;N}=\boldsymbol{H \bar{s}}^{M,\;N},$$
(31)

where H is the five diagonal semi-infinite symmetric matrix

$$\mathbf{H=} \begin{bmatrix} c_{0} & b_{1} & a_{2} & 0 & {\cdots} \\ b_{1} & c_{1} & b_{2} & a_{3} & {\cdots} \\ a_{2} & b_{2} & c_{2} & b_{3} & {\ddots} \\ 0 & a_{3} & b_{3} & c_{3} & {\ddots} \\ {\vdots} & {\vdots} & {\ddots} & {\ddots} & \ddots \end{bmatrix} ,$$
(32)

and \(\mathbf {\bar {s}}^{M,\;N}=[s_{0}^{M,\;N}(x),s_{1}^{M,\;N}(x),s_{2}^{M,\;N}(x), {\ldots } ]^{\intercal }\).

Proposition 3

From (25), we get

$$\boldsymbol{\bar{s}}^{M,\;N}=\boldsymbol{T \bar{p}}^{[2]}$$
(33)

where T is the lower triangular, semi-infinite, and nonsingular matrix with positive diagonal entries

$$\mathbf{T}= \begin{bmatrix} \gamma_{0,\;0} & & & & \\ \gamma_{0,\;1} & \gamma_{1,\;1} & & & \\ \gamma_{0,\;2} & \gamma_{1,\;2} & \gamma_{2,\;2} & & \\ \gamma_{0,\;3} & \gamma_{1,\;3} & \gamma_{2,\;3} & \gamma_{3,\;3} & \\ & & {\ddots} & {\ddots} & \ddots \end{bmatrix}$$
(34)

and \(\boldsymbol {\bar {p}}^{[2]}=[p_{0}^{[2]}(x),p_{1}^{[2]}(x),p_{2}^{[2]}(x), {\ldots } ]^{\intercal }\).

We will denote by J the Jacobi matrix associated with the orthonormal sequence {pn(x)}n≥ 0, with respect to the measure dμ. As a consequence, we have

$$x \boldsymbol{\bar{p}}=\mathbf{J} \boldsymbol{\bar{p}}.$$

Let J[2] be the Jacobi matrix associated with the 2-iterated OPS \(\{p_{n}^{[2]}(x)\}_{n\geq 0}\). Notice that from

$$x \boldsymbol{\bar{p}}^{[2]}=\mathbf{J}_{[2]} \boldsymbol{\bar{p}}^{[2]},$$

we get

$$\left(x-c\right)^{2}\boldsymbol{\bar{p}}^{[2]}=\left(\mathbf{J}_{[2]}-c \mathbf{I}\right)^{2}\boldsymbol{\bar{p}}^{[2]}.$$
(35)

Starting with J − cI, and assuming c is located to the left of supp(μ), all its leading principal submatrices are positive definite, so we get the following Cholesky factorization

$$\mathbf{J}-c\mathbf{I=LL}^{\intercal}.$$
(36)

Here, L is a lower bidiagonal matrix with positive diagonal entries. From [3], we know

$$\mathbf{J}_{[1]}-c\mathbf{I=\mathbf{L}^{\intercal}\mathbf{L}=\mathbf{L}_{1} \mathbf{L}_{1}^{\intercal}},$$
(37)

where L1 is a lower bidiagonal matrix with positive diagonal entries. Notice that if c is located to the right of the support, then one must deal with the Cholesky factorization of the matrix cI − J.
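On finite sections, the iteration described above is immediate to implement: one Cholesky step maps a truncated Jacobi matrix of dμ to a truncated Jacobi matrix of (x − c)dμ, and two steps produce J[2]. The following sketch (a minimal illustration, with our own function name) uses the Laguerre case of Section 7 as test data; because of the truncation, only the leading entries of the output are exact.

```python
import numpy as np

def christoffel_step(J, c):
    """One Cholesky (Christoffel) step on a finite section: factor J - cI = L L^T
    as in (36) and return L together with L^T L + cI, which is the truncated Jacobi
    matrix of the measure multiplied by (x - c), cf. (37).  Because of the truncation,
    only the leading entries of the output are exact."""
    I = np.eye(len(J))
    L = np.linalg.cholesky(J - c*I)
    return L, L.T @ L + c*I

# test data: Laguerre alpha = 0, where J has diagonal 2n+1 and off-diagonal n+1 (cf. (43))
m, c = 8, -1.0
off = np.arange(1, m, dtype=float)
J = np.diag(2.0*np.arange(m) + 1) + np.diag(off, 1) + np.diag(off, -1)

L,  J1 = christoffel_step(J,  c)     # Jacobi section of (x - c) d(mu)
L1, J2 = christoffel_step(J1, c)     # Jacobi section of (x - c)^2 d(mu), i.e. J_[2]
print(J2[:3, :3])
```

The printed block should agree with the leading entries of the matrix J[2] obtained in Section 7.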

Next, we show that the five diagonal matrix H associated with (30) can be given in terms of the five diagonal matrix \((\mathbf {J}_{[2]}-c\mathbf {I)}^{2}\). Combining (31) with (33), we get

$$\mathbf{T}(x-c)^{2}\boldsymbol{\bar{p}}^{[2]}=\boldsymbol{HT \bar{p}}^{[2]}.$$
(38)

Substituting (35) into (38)

$$\mathbf{T}\left(\mathbf{J}_{[2]}-c\mathbf{I}\right)^{2}\boldsymbol{ \bar{p}} ^{[2]}=\boldsymbol{HT \bar{p}}^{[2]}.$$

Hence, we state the following

Proposition 4

The semi-infinite five diagonal matrix H can be obtained from the matrix \(\left (\mathbf {J}_{[2]}-c\mathbf {I}\right )^{2}\) as follows

$$\mathbf{H}=\mathbf{T}\left(\mathbf{J}_{[2]}-c\mathbf{I}\right)^{2}\mathbf{T}^{-1}.$$

Next, we repeat the above process, commuting the order of the factors in \(\mathbf {L}_{1}\mathbf {L}_{1}^{\intercal }\). Thus,

$$\mathbf{L}_{1}^{\intercal}\mathbf{L}_{1}=\mathbf{J}_{[2]}-c\mathbf{I}.$$
(39)

From (37), we have \(\mathbf {L}_{1}=\mathbf {L}^{\intercal }\mathbf {LL}_{1}^{-\intercal }\), and replacing this expression as above, it yields

$$\mathbf{J}_{[2]}-cI=\mathbf{L}_{1}^{\intercal}\left(\mathbf{L}^{\intercal} \mathbf{LL}_{1}^{-\intercal }\right) =\left(\mathbf{L}_{1}^{\intercal } \mathbf{L}^{\intercal }\right) \left(\mathbf{LL}_{1}^{-\intercal }\right) =\left(\mathbf{LL}_{1}\right)^{\intercal}\left(\mathbf{LL} _{1}^{-\intercal}\right) =\mathbf{RQ}.$$

Notice that \(\mathbf {R}=\left (\mathbf {LL}_{1}\right )^{\intercal }\) is an upper triangular matrix, with positive diagonal entries because L and L1 are lower bidiagonal matrices. Now, for the matrix \(\mathbf {Q}=\mathbf {LL}_{1}^{-\intercal }\), we have

$$\begin{array}{@{}rcl@{}} \mathbf{QQ}^{\intercal} &=&\mathbf{LL}_{1}^{-\intercal }\left(\mathbf{LL} _{1}^{-\intercal }\right)^{\intercal }=\mathbf{LL}_{1}^{-\intercal}\left(\mathbf{L}_{1}^{-1}\mathbf{L}^{\intercal }\right) \\ &=&\mathbf{L}\left(\mathbf{L}_{1}^{-\intercal }\mathbf{L}_{1}^{-1}\right) \mathbf{L}^{\intercal}=\mathbf{L}\left(\mathbf{L}_{1}\mathbf{L} _{1}^{\intercal }\right)^{-1}\mathbf{L}^{\intercal}. \end{array}$$

Next, from (37), \(\mathbf {L}_{1}\mathbf {L}_{1}^{\intercal }=\mathbf {L}^{\intercal }\mathbf {L}.\) Thus,

$$\mathbf{QQ}^{\intercal}=\mathbf{L}\left(\mathbf{L}^{\intercal}\mathbf{L} \right)^{-1}\mathbf{L}^{\intercal }=\mathbf{LL}^{-1}\mathbf{L}^{-\intercal } \mathbf{L}^{\intercal}=\mathbf{I},$$

as well as

$$\mathbf{Q}^{\intercal}\mathbf{Q}=\left(\mathbf{L}_{1}^{-1}\mathbf{L} ^{\intercal }\right) \left(\mathbf{L}\mathbf{L}_{1}^{-\intercal }\right) = \mathbf{L}_{1}^{-1}\left(\mathbf{L}^{\intercal }\mathbf{L}\right) \mathbf{L} _{1}^{-\intercal }=\mathbf{L}_{1}^{-1}\left(\mathbf{L}_{1}\mathbf{L} _{1}^{\intercal }\right) \mathbf{L}_{1}^{-\intercal }=\mathbf{I}.$$

This means that Q is an orthogonal matrix. Thus, we have proved the following

Proposition 5

The positive definite matrix J[2] − cI can be factorized as follows

$$\mathbf{J}_{[2]}-c\mathbf{I}=\mathbf{RQ},$$
(40)

where R is an upper triangular matrix, and Q is an orthogonal matrix, i.e. \(\mathbf {QQ}^{\intercal }=\mathbf {Q}^{\intercal } \mathbf {Q}=\mathbf {I}\).

Notice that the above result has also been proved in [12], but the fact that \(\mathbf {QQ}^{\intercal }=\mathbf {I}\) also holds is not proved therein.

Taking into account the previous result, we come back to (36) to observe

$$\mathbf{J}-c\mathbf{I}=\mathbf{LL}^{\intercal }=\mathbf{L\left(\mathbf{L}_{1}^{-\intercal }\mathbf{L}_{1}^{\intercal}\right) L}^{\intercal}=\left(\mathbf{L\mathbf{L}_{1}^{-\intercal }}\right) \left(\mathbf{\mathbf{L}_{1}^{\intercal }L}^{\intercal}\right) =\left(\mathbf{L\mathbf{L} _{1}^{-\intercal }}\right) \left(\mathbf{L\mathbf{L}_{1}}\right)^{\intercal}=\mathbf{QR}.$$

Thus, we can summarize the above as follows

Proposition 6

Let J be the symmetric Jacobi matrix such that

$$x \boldsymbol{\bar{p}}=\mathbf{J} \boldsymbol{\bar{p}}.$$

If \(\boldsymbol {\bar {p}=}[p_{0}(x),p_{1}(x),p_{2}(x),\ldots ]^{\intercal }\) is the infinite vector associated with the orthonormal polynomial sequence with respect to dμ and we assume pn(c)≠ 0 for n ≥ 1, then the following factorization

$$\mathbf{J}-c\mathbf{I}=\mathbf{QR}$$

holds. Here, R is an upper triangular matrix, and Q is an orthogonal matrix , i.e. \(\mathbf {QQ}^{\intercal }=\mathbf {Q}^{\intercal } \mathbf {Q}=\mathbf {I}\). Under these conditions,

$$\mathbf{RQ}=\mathbf{J}_{[2]}-c\mathbf{I},$$

where J[2] is the symmetric Jacobi matrix such that \(x \boldsymbol {\bar {p}}^{[2]}=\mathbf {J}_{[2]} \boldsymbol {\bar {p}}^{[2]}\), and \(\boldsymbol {\bar {p}}^{[2]}\) is the infinite vector associated with the orthonormal polynomial sequence with respect to \((x-c)^{2}d\mu \).

Observe that this is an alternative proof of Theorem 3.3 in [4].
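Propositions 5 and 6 can be tested numerically on finite sections along the same lines. The sketch below recomputes L and L1 for the Laguerre data of Section 7, forms \(\mathbf{Q}=\mathbf{LL}_{1}^{-\intercal}\) and \(\mathbf{R}=(\mathbf{LL}_{1})^{\intercal}\), and checks the QR and RQ relations on a leading block, where the truncation has no effect; the section size is ours.

```python
import numpy as np

# finite-section check of Propositions 5 and 6 (Laguerre alpha = 0, c = -1);
# the comparisons are restricted to a leading block, unaffected by the truncation
m, c = 10, -1.0
off = np.arange(1, m, dtype=float)
J = np.diag(2.0*np.arange(m) + 1) + np.diag(off, 1) + np.diag(off, -1)
I = np.eye(m)

L  = np.linalg.cholesky(J - c*I)      # (36)
J1 = L.T @ L + c*I                    # (37)
L1 = np.linalg.cholesky(J1 - c*I)
J2 = L1.T @ L1 + c*I                  # (39)

Q = L @ np.linalg.inv(L1).T           # Q = L L_1^{-T}
R = (L @ L1).T                        # R = (L L_1)^T, upper triangular

k = m - 3
print(np.allclose((Q @ R)[:k, :k], (J  - c*I)[:k, :k]))   # Proposition 6: J  - cI = QR
print(np.allclose((R @ Q)[:k, :k], (J2 - c*I)[:k, :k]))   # Proposition 5: J_[2] - cI = RQ
print(np.allclose((Q.T @ Q)[:k, :k], np.eye(k)))          # Q^T Q = I on the section
```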

Since J[2] is a symmetric matrix, from (40) and \(\mathbf {QQ}^{\intercal }=\mathbf {I}\), we easily observe

$$\left(\mathbf{J}_{[2]}-c\mathbf{I}\right)^{2}=\left(\mathbf{J}_{[2]}-c \mathbf{I}\right) \left(\mathbf{J}_{[2]}-c\mathbf{I}\right)^{\intercal }= \mathbf{RQQ}^{\intercal}\mathbf{R}^{\intercal}=\mathbf{RR}^{\intercal}.$$

Thus,

Proposition 7

The square of the positive definite symmetric matrix J[2]cI has the following factorization

$$\left(\mathbf{J}_{[2]}-c\mathbf{I}\right)^{2}=\mathbf{RR}^{\intercal },$$
(41)

where R is an upper triangular matrix. Furthermore

$$\left(\mathbf{J}-c\mathbf{I}\right)^{2}=\mathbf{R}^{\intercal}\mathbf{R}.$$

Next, we are ready to prove that there is a very close relation between the five diagonal semi-infinite symmetric matrix H defined in (32), and the lower triangular, semi-infinite, nonsingular matrix T defined in (34).

We will use the following notation. Let \(\boldsymbol {\bar {f}}\) be any semi-infinite column vector with polynomial entries \(\boldsymbol {\bar {f}} =[f_{0}(x),f_{1}(x),f_{2}(x),\ldots ]^{\intercal }\). Then, \(\langle \boldsymbol {\bar {f}},\boldsymbol {\bar {g}}\rangle\) will represent the given inner product of \(\boldsymbol {\bar {f}}\) and \(\boldsymbol {\bar {g}}\) applied componentwise, that is, we get the following semi-infinite square matrix

$$\langle \boldsymbol{\bar{f}},\boldsymbol{\bar{g}}\rangle = \begin{bmatrix} \langle f_{0},g_{0}\rangle & \langle f_{0},g_{1}\rangle & \langle f_{0},g_{2}\rangle & {\cdots} \\ \langle f_{1},g_{0}\rangle & \langle f_{1},g_{1}\rangle & \langle f_{1},g_{2}\rangle & {\cdots} \\ \langle f_{2},g_{0}\rangle & \langle f_{2},g_{1}\rangle & \langle f_{2},g_{2}\rangle & {\cdots} \\ {\vdots} & {\vdots} & {\vdots} & \ddots \end{bmatrix} .$$

Next, let us recall (33), i.e. \(\boldsymbol {\bar {s}}^{M,\;N}= \boldsymbol {T \bar {p}}^{[2]}.\) Let us consider the inner product

$$\langle \boldsymbol{\bar{s}}^{M,\;N},\boldsymbol{\bar{s}}^{M,\;N}\rangle_{\lbrack 2]}=\langle \boldsymbol{T \bar{p}}^{[2]},\boldsymbol{T \bar{p}}^{[2]}\rangle _{\lbrack 2]}=\mathbf{T}\langle \boldsymbol{\bar{p}}^{[2]},\boldsymbol{\bar{p}}^{[2]}\rangle_{\lbrack 2]}\mathbf{T}^{\intercal }=\mathbf{T}\mathbf{T}^{\intercal },$$

where \(\langle \boldsymbol {\bar {p}}^{[2]},\boldsymbol {\bar {p}}^{[2]}\rangle _{\lbrack 2]}=\mathbf {I}\) because we deal with orthonormal polynomials. On the other hand, from (16) and (31), one has

$$\langle \boldsymbol{\bar{s}}^{M,\;N},\boldsymbol{\bar{s}}^{M,\;N}\rangle_{\lbrack 2]}=\langle (x-c)^{2}\boldsymbol{\bar{s}}^{M,\;N},\boldsymbol{\bar{s}}^{M,\;N}\rangle_{S}=\langle \boldsymbol{H \bar{s}}^{M,\;N},\boldsymbol{\bar{s}}^{M,\;N}\rangle_{S}= \mathbf{H }\langle \boldsymbol{\bar{s}}^{M,\;N},\boldsymbol{\bar{s}}^{M,\;N}\rangle _{S}=\mathbf{H},$$

where again \(\langle \boldsymbol {\bar {s}}^{M,\;N},\boldsymbol {\bar {s}}^{M,\;N}\rangle _{S}=\mathbf {I}\) since we deal with orthonormal polynomials. Thus, we have proved the following

Theorem 3

The five diagonal semi-infinite symmetric matrix H defined in (32), has the following Cholesky factorization

$$\mathbf{H}=\mathbf{TT}^{\intercal},$$

where T is the lower triangular, semi-infinite matrix defined in (34).

Finally, from (33) and (31), we have

$$(x-c)^{2}\boldsymbol{\bar{s}}^{M,\;N}=\boldsymbol{H \bar{s}}^{M,\;N}=(x-c)^{2}\boldsymbol{T \bar{p}}^{[2]}=\mathbf{T}(x-c)^{2}\boldsymbol{\bar{p}}^{[2]}.$$

According to (35), we get

$$\mathbf{T}(x-c)^{2}\boldsymbol{\bar{p}}^{[2]}=\mathbf{T}\left(\mathbf{J}_{[2]}-c \mathbf{I}\right)^{2}\boldsymbol{\bar{p}}^{[2]}.$$

Next, from (41), we obtain

$$\mathbf{T}\left(\mathbf{J}_{[2]}-c\mathbf{I}\right)^{2}\boldsymbol{\bar{p}} ^{[2]}=\boldsymbol{T\mathbf{RR}^{\intercal}\bar{p}}^{[2]}=\boldsymbol{H \bar{s}} ^{M,N}=\mathbf{TT}^{\intercal}\boldsymbol{T \bar{p}}^{[2]}.$$

Therefore,

$$\boldsymbol{T\mathbf{RR}^{\intercal}\bar{p}}^{[2]}=\mathbf{TT}^{\intercal} \boldsymbol{T \bar{p}}^{[2]}$$

and, as a consequence,

$$\mathbf{\mathbf{RR}^{\intercal}}=\mathbf{T}^{\intercal}\mathbf{T}.$$

Theorem 4

For any positive Borel measure dμ supported on an infinite subset \(E\subseteq \mathbb {R}\), if J is the corresponding semi-infinite symmetric Jacobi matrix, and assuming that c does not belong to the interior of the convex hull of E, then for the 2-iterated perturbed measure \((x-c)^{2}d\mu \) such that J[2] is the corresponding semi-infinite symmetric Jacobi matrix, we get

$$\left(\mathbf{J}_{[2]}-c\mathbf{I}\right)^{2}=\mathbf{RR}^{\intercal}= \mathbf{T}^{\intercal }\mathbf{T}.$$

This is the symmetric version of Theorem 5.3 in [6], where the authors use another kind of factorization based on monic orthogonal polynomials.

7 An example with Laguerre polynomials

In [14] and Section 5, the coefficients of (30) for the monic Laguerre Sobolev-type orthogonal polynomials have been deduced. In the sequel, we illustrate the matrix approach presented in the previous section for the Laguerre case with a particular example. First, let us denote by \(\{\ell _{n}^{\alpha }(x)\}_{n\geq 0}\), \(\{\ell _{n}^{\alpha ,\;[2]}(x)\}_{n\geq 0}\), \(\{s_{n}^{M,\;N}(x)\}_{n\geq 0}\) the sequences of orthonormal polynomials with respect to the inner products (1), (10) and (16), respectively, when dμ(x) = \(x^{\alpha }e^{-x}dx\), α > − 1, is the classical Laguerre weight function supported on \((0,+\infty )\).

In order to obtain compact expressions for the matrices, in this section we will particularize all of those presented in the previous section for the choice of parameters α = 0, c = − 1, M = 1, and N = 1. Under these conditions, using any symbolic algebra package such as, for example, Wolfram Mathematica, the explicit expressions of the sequences of orthogonal polynomials appearing in our study can be deduced.

From Section 5, we know

$$\mathbf{H}= \begin{bmatrix} \frac{5}{2} & \frac{11}{2\sqrt{2}} & \frac{1}{2}\sqrt{\frac{89}{2}} & & & \\ \frac{11}{2\sqrt{2}} & \frac{19}{2} & \frac{129}{\sqrt{89}} & \frac{1}{2} \sqrt{\frac{35705}{178}} & & \\ \frac{1}{2}\sqrt{\frac{89}{2}} & \frac{129}{\sqrt{89}} & \frac{5331}{178} & \frac{1503493}{178\sqrt{71410}} & 4\sqrt{\frac{26690173}{3177745}} & \\ & \frac{1}{2}\sqrt{\frac{35705}{178}} & \frac{1503493}{178\sqrt{71410}} & \frac{415128273}{6355490} & \frac{72140663342}{35705}\sqrt{\frac{2}{ 2375425397}} & {\ddots} \\ & & 4\sqrt{\frac{26690173}{3177745}} & \frac{72140663342}{35705}\sqrt{\frac{ 2}{2375425397}} & \frac{108116532681297}{952972626965} & {\ddots} \\ & & & {\ddots} & {\ddots} & \ddots \end{bmatrix}.$$

On the other hand, from (33), we obtain

$$\mathbf{T}= \begin{bmatrix} \sqrt{\frac{5}{2}} & & & & & \\ \frac{11}{2\sqrt{5}} & \frac{1}{2}\sqrt{\frac{69}{5}} & & & & \\ \frac{1}{2}\sqrt{\frac{89}{5}} & \frac{1601}{2\sqrt{30705}} & 4\sqrt{\frac{ 1777}{6141}} & & & \\ & 5\sqrt{\frac{7141}{12282}} & \frac{2911082\sqrt{2}}{\sqrt{389632847685}} & 6\sqrt{\frac{346922}{1714805}} & & \\ & & \sqrt{\frac{1841621937}{63447785}} & \frac{2555758506}{\sqrt{ 89202693674855485}} & 12\sqrt{\frac{4046188065}{52019147177}} & \\ & & & {\ddots} & {\ddots} & \ddots \end{bmatrix} .$$
(42)

Notice that if we multiply T by its transpose, then one recovers H, according to the statement of Theorem 3.
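All the quantities entering T can also be generated numerically, without symbolic computation, by following Lemmas 1 and 2 and Theorem 1 step by step. The sketch below does so for α = 0, c = −1, M = N = 1 and prints the leading blocks of T and TT⊺, which can be compared with (42) and with H above; the truncation size and all variable names are ours.

```python
import numpy as np
from math import factorial

# Numerical reconstruction of T for alpha = 0, c = -1, M = N = 1, following
# Lemma 1, Lemma 2 and Theorem 1; a sketch, all variable names are ours.
m, c, M, N = 8, -1.0, 1.0, 1.0

# monic Laguerre data at c: P_n(c), P_n'(c) via (2) and its derivative, ||P_n|| = n!
P, dP = np.zeros(m + 2), np.zeros(m + 2)
P[0], P[1], dP[1] = 1.0, c - 1.0, 1.0
for n in range(1, m + 1):
    P[n + 1]  = (c - (2*n + 1))*P[n] - n**2*P[n - 1]
    dP[n + 1] = P[n] + (c - (2*n + 1))*dP[n] - n**2*dP[n - 1]
norm = np.array([float(factorial(n)) for n in range(m + 2)])
r, p_c, dp_c = 1/norm, P/norm, dP/norm

K, K01, K11 = np.cumsum(p_c**2), np.cumsum(p_c*dp_c), np.cumsum(dp_c**2)

# S_n(c), S_n'(c) from the linear system of Lemma 1, and t_n from Lemma 2
S, dS, t = np.zeros(m + 1), np.zeros(m + 1), np.zeros(m + 1)
S[0], dS[0] = P[0], dP[0]                    # for n = 0 the kernel terms in (19) are empty
t[0] = 1/np.sqrt(norm[0]**2 + M*S[0]*P[0] + N*dS[0]*dP[0])
for n in range(1, m + 1):
    A = np.array([[1 + M*K[n-1], N*K01[n-1]], [M*K01[n-1], 1 + N*K11[n-1]]])
    S[n], dS[n] = np.linalg.solve(A, [P[n], dP[n]])
    t[n] = 1/np.sqrt(norm[n]**2 + M*S[n]*P[n] + N*dS[n]*dP[n])
s_c, ds_c = t*S, t*dS                        # values of the orthonormal s_n at c

# 2-iterated data: r_n^{[2]} from (15), e_n and d_n from (13)/(17)
r2 = np.array([r[n+1]*np.sqrt(K[n]/K[n+1]) for n in range(m + 1)])
e  = (r[:m+1]/r2)**2
d  = np.array([r[n+1]/r[n+2]*p_c[n+2]/p_c[n+1]
               + (r[n]/r2[n])**2*r[n+1]/r[n]*p_c[n]/p_c[n+1] for n in range(m)])

# the matrix T of (34), with the entries given by Theorem 1
T = np.zeros((m + 1, m + 1))
for n in range(m + 1):
    T[n, n] = t[n]/r2[n]
    if n >= 1:
        T[n, n-1] = -np.sqrt(K[n-1]/K[n])*(d[n-1]*t[n]/r[n]
                     + e[n-1]*r[n]/r[n-1]*(M*s_c[n]*p_c[n-1] + N*ds_c[n]*dp_c[n-1]))
    if n >= 2:
        T[n, n-2] = r2[n-2]/t[n]

print(T[:3, :3])             # compare with (42)
print((T @ T.T)[:3, :3])     # compare with H, according to Theorem 3
```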

The tridiagonal symmetric Jacobi matrix associated with the standard orthonormal family \(\{\ell _{n}^{\alpha ,\;[2]}(x)\}_{n\geq 0}\) reads

$$\mathbf{J}_{[2]}= \begin{bmatrix} \frac{11}{5} & \frac{\sqrt{69}}{5} & & & & \\ \frac{\sqrt{69}}{5} & \frac{1501}{345} & \frac{2\sqrt{8885}}{69} & & & \\ & \frac{2\sqrt{8885}}{69} & \frac{790903}{122613} & \frac{3\sqrt{4975797}}{ 1777} & & \\ & & \frac{3\sqrt{4975797}}{1777} & \frac{1091564609}{128144801} & \frac{4 \sqrt{7450856157}}{72113} & \\ & & & \frac{4\sqrt{7450856157}}{72113} & \frac{3195035811691}{302365554333} & {\ddots} \\ & & & & {\ddots} & \ddots \end{bmatrix} ,$$

and from this expression it is straightforward to check Proposition 4. Next, from the symmetric Jacobi matrix

$$\mathbf{J}= \begin{bmatrix} 1 & 1 & & & & \\ 1 & 3 & 2 & & & \\ & 2 & 5 & 3 & & \\ & & 3 & 7 & 4 & \\ & & & 4 & 9 & {\ddots} \\ & & & & {\ddots} & \ddots \end{bmatrix}$$
(43)

associated with \(\{\ell _{n}^{\alpha }(x)\}_{n\geq 0}\), we can implement the Cholesky factorization \(\mathbf {J-}c\mathbf {I=\mathbf {LL}^{\intercal }}\) in such a way that the lower bidiagonal factor is

$$\mathbf{L}= \begin{bmatrix} \sqrt{2} & & & & & \\ \frac{1}{\sqrt{2}} & \sqrt{\frac{7}{2}} & & & & \\ & 2\sqrt{\frac{2}{7}} & \sqrt{\frac{34}{7}} & & & \\ & & 3\sqrt{\frac{7}{34}} & \sqrt{\frac{209}{34}} & & \\ & & & 4\sqrt{\frac{34}{209}} & \sqrt{\frac{1546}{209}} & \\ & & & & {\ddots} & \ddots \end{bmatrix} .$$

Following (37), we commute the order of L and its transpose to obtain \(\mathbf {\mathbf {L}^{\intercal }\mathbf {L}=J}_{[1]}-c \mathbf {I}\), where

$$\mathbf{J}_{[1]}= \begin{bmatrix} \frac{3}{2} & \frac{\sqrt{7}}{2} & & & & \\ \frac{\sqrt{7}}{2} & \frac{51}{14} & \frac{4\sqrt{17}}{7} & & & \\ & \frac{4\sqrt{17}}{7} & \frac{1359}{238} & \frac{3\sqrt{1463}}{34} & & \\ & & \frac{3\sqrt{1463}}{34} & \frac{55071}{7106} & \frac{8\sqrt{13141}}{209} & \\ & & & \frac{8\sqrt{13141}}{209} & \frac{3159027}{323114} & {\ddots} \\ & & & & {\ddots} & \ddots \end{bmatrix} .$$

The computation of a new Cholesky factorization of J[1]cI yields \(\mathbf {\mathbf {L}_{1}\mathbf {L}_{1}^{\intercal }}\), where

$$\mathbf{L}_{1}= \begin{bmatrix} \sqrt{\frac{5}{2}} & & & & & \\ \sqrt{\frac{7}{10}} & \sqrt{\frac{138}{35}} & & & & \\ & 2\sqrt{\frac{170}{483}} & \sqrt{\frac{12439}{2346}} & & & \\ & & 3\sqrt{\frac{14421}{60418}} & \sqrt{\frac{2451842}{371393}} & & \\ & & & 4\sqrt{\frac{2747242}{15071617}} & \sqrt{\frac{876324669}{111486698}} & \\ & & & & {\ddots} & \ddots \end{bmatrix} .$$

Commuting the order of the matrices in the decomposition, we finally deduce expression (39), i.e. \(\mathbf {L}_{1}^{\intercal } \mathbf {L}_{1}=\mathbf {J}_{[2]}-c\mathbf {I}\). With these last matrices in mind, we find the matrices R and Q of Proposition 5. Thus,

$$\mathbf{Q}=\mathbf{LL}_{1}^{-\intercal }= \begin{bmatrix} \frac{2}{\sqrt{5}} & \frac{-7}{\sqrt{345}} & \frac{68}{\sqrt{122613}} & \frac{-1254}{\sqrt{128144801}} & \frac{12368\sqrt{3}}{\sqrt{100788518111}} & {\cdots} \\ \frac{1}{\sqrt{5}} & \frac{14}{\sqrt{345}} & \frac{-136}{\sqrt{122613}} & \frac{2508}{\sqrt{128144801}} & \frac{-24736\sqrt{3}}{\sqrt{100788518111}} & {\cdots} \\ & 2\sqrt{\frac{5}{69}} & \frac{-238}{\sqrt{122613}} & \frac{-4389}{\sqrt{ 128144801}} & \frac{43288\sqrt{3}}{\sqrt{100788518111}} & {\cdots} \\ & & 3\sqrt{\frac{69}{1777}} & \frac{7106}{\sqrt{128144801}} & \frac{-210256 }{\sqrt{302365554333}} & {\cdots} \\ & & & 4\sqrt{\frac{1777}{72113}} & \frac{323114}{\sqrt{302365554333}} & {\cdots} \\ & & & & {\ddots} & \ddots \end{bmatrix}$$
(44)

and

$$\mathbf{R}=\left(\mathbf{LL}_{1}\right)^{\intercal }= \begin{bmatrix} \sqrt{5} & \frac{6}{\sqrt{5}} & \frac{2}{\sqrt{5}} & & & \\ & \sqrt{\frac{69}{5}} & \frac{88}{\sqrt{345}} & 2\sqrt{\frac{15}{23}} & & \\ & & \sqrt{\frac{1777}{69}} & 790\sqrt{\frac{3}{40871}} & 12\sqrt{\frac{69}{ 1777}} & \\ & & & \sqrt{\frac{72113}{1777}} & \frac{99504}{\sqrt{128144801}} & \ddots \\ & & & & \sqrt{\frac{4192941}{72113}} & {\ddots} \\ & & & & & \ddots \end{bmatrix} .$$
(45)

Observe that Q is a matrix whose rows are orthogonal vectors; multiplying (44) above by its transpose (in this order), we get

$$\mathbf{QQ}^{\intercal}\approx \begin{bmatrix} 0.99657 & 0.0068687 & -0.27601 & 0.019461 & -0.029907 & {\cdots} \\ 0.0068687 & 0.98626 & 0.55201 & -0.038923 & 0.059815 & {\cdots} \\ -0.27601 & 0.55201 & 0.95793 & -0.73549 & -0.10468 & {\cdots} \\ 0.019461 & -0.038923 & -0.73549 & 0.88972 & 0.16948 & {\cdots} \\ -0.029907 & 0.059815 & -0.10468 & 0.16948 & 0.73956 & {\cdots} \\ {\vdots} & {\vdots} & {\vdots} & {\vdots} & {\vdots} & \ddots \end{bmatrix} \approx \mathbf{I}.$$

Notice that we implement our algorithm with finite matrices, which explains why the above product is only approximately the identity. Notwithstanding the foregoing, multiplying the transpose of (44) by (44), we have \(\mathbf {Q}^{\intercal }\mathbf {Q}=\mathbf {I}\).

Employing the matrices above, it is easy to test numerically the expressions \(\mathbf {H}=\mathbf {T}\left (\mathbf {J}_{[2]}-c\mathbf {I}\right )^{2}\mathbf {T}^{-1}\), \(\mathbf {J}_{[2]}-c\mathbf {I}=\mathbf {RQ}\), and \(\mathbf {J}-c\mathbf {I}=\mathbf {QR}\), according to the statements of Propositions 4, 5 and 6, respectively. It is also possible to check that, using the numerical expression (42), and alternatively the expression (45), we recover

$$\left(\mathbf{J}_{[2]}-c\mathbf{I}\right)^{2}=$$
$$\begin{bmatrix} 13 & \frac{118}{\sqrt{69}} & 2\sqrt{\frac{1777}{345}} & & & \\ \frac{118}{\sqrt{69}} & \frac{2681}{69} & \frac{227476}{69\sqrt{8885}} & 2 \sqrt{\frac{1081695}{40871}} & & \\ 2\sqrt{\frac{1777}{345}} & \frac{227476}{69\sqrt{8885}} & \frac{9460213}{ 122613} & \frac{84432374\sqrt{\frac{3}{1658599}}}{1777} & 36\sqrt{\frac{ 32145881}{128144801}} & \\ & 2\sqrt{\frac{1081695}{40871}} & \frac{84432374\sqrt{\frac{3}{1658599}}}{ 1777} & \frac{16364422385}{128144801} & \frac{628405520264}{72113\sqrt{ 7450856157}} & {\ddots} \\ & & 36\sqrt{\frac{32145881}{128144801}} & \frac{628405520264}{72113\sqrt{ 7450856157}} & \frac{57572534044081}{302365554333} & {\ddots} \\ & & & {\ddots} & {\ddots} & \ddots \end{bmatrix} ,$$
(46)

according to Theorem 4.

Finally, Proposition 7 can be numerically tested from (43), (45) and (46).
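Such a test can be carried out, for instance, with the following sketch: it builds a finite section of (43), performs the two Cholesky steps, forms R as in (45), and compares R⊺R and RR⊺ with the squared shifted Jacobi matrices on a leading block where the truncation plays no role; the section size is ours.

```python
import numpy as np

# numerical test of Proposition 7 on finite sections (Laguerre alpha = 0, c = -1);
# the comparison is restricted to a leading block, unaffected by the truncation
m, c = 12, -1.0
off = np.arange(1, m, dtype=float)
J = np.diag(2.0*np.arange(m) + 1) + np.diag(off, 1) + np.diag(off, -1)   # cf. (43)
I = np.eye(m)

L  = np.linalg.cholesky(J - c*I)
L1 = np.linalg.cholesky(L.T @ L)          # J_[1] - cI = L^T L = L_1 L_1^T
J2 = L1.T @ L1 + c*I                      # J_[2]
R  = (L @ L1).T                           # cf. (45)

k = m - 4
print(np.allclose((R.T @ R)[:k, :k], ((J  - c*I) @ (J  - c*I))[:k, :k]))
print(np.allclose((R @ R.T)[:k, :k], ((J2 - c*I) @ (J2 - c*I))[:k, :k]))
```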