1 Introduction

Many fundamental problems in interpolation and approximation give rise to interesting linear algebra computations related to Gram and Wronskian matrices. Unfortunately, these matrices are usually ill conditioned and, consequently, the corresponding algebraic computations lose accuracy as the dimension increases.

Gram matrices arise in the computation of the best approximation from a subspace of a space equipped with an inner product. Hilbert matrices are notoriously ill-conditioned Gram matrices corresponding to monomial bases. In [11], accurate algebraic computations with Hilbert matrices are achieved by using an elegant representation of them as a product of nonnegative bidiagonal matrices. To the best of the authors' knowledge, computations with high relative accuracy (HRA) using Gram matrices of other bases have not been obtained so far.

Wronskian matrices arise in many applications, such as Hermite interpolation problems and, in particular, Taylor interpolation problems. In [15], the bidiagonal factorization of the Wronskian matrix of the monomial basis of the space of polynomials of a given degree and the bidiagonal factorization of the Wronskian matrix of the basis of exponential polynomials were obtained. Furthermore, in [16] a procedure to accurately compute the bidiagonal decomposition of collocation and Wronskian matrices of the wide family of Jacobi polynomials is proposed. These results are then used to perform accurate computations with collocation and Wronskian matrices of well-known types of Jacobi polynomials.

The geometric distribution has applications in population and econometric models, and the Poisson distribution is popular for modeling the number of times an event occurs in an interval of time or space. Associated with these distributions, corresponding bases can be defined (see Sects. 3 and 5, respectively). In fact, the Poisson basis also plays a useful role in computer-aided geometric design (see [9, 20]). Collocation matrices of both bases were analyzed in [14], where their total positivity was proved and their bidiagonal factorization was obtained with HRA. Using such a bidiagonal factorization, the algorithms presented in [12] can be applied to solve algebraic problems with HRA. The bidiagonal factorization of the collocation matrices of other interesting bases can be found in [2, 3, 13, 17, 18].

In this paper, we deal with Gram and Wronskian matrices of the two types of bases mentioned in the previous paragraph. The total positivity of these matrices is analyzed. In contrast with the corresponding collocation and Gram matrices, we show that the Wronskian matrices are not totally positive. However, we relate them to other totally positive matrices, so that their associated bidiagonal factorizations can be used to provide accurate algorithms for the algebraic computations mentioned before.

We now describe the layout of the paper. In Sect. 2, we present basic notations and preliminary results on total positivity, Neville elimination, bidiagonal factorization, HRA and Wronskian matrices of monomial bases. In Sect. 3, we prove that the Gram matrix of geometric bases is strictly totally positive (STP) and propose a bidiagonal factorization for the resolution with HRA of related algebraic problems. The bidiagonal factorization of Wronskian matrices of geometric bases is obtained in Sect. 4, where the methods to derive algorithms with HRA are also shown. Section 5 proves that Gram matrices of Poisson bases are STP. Furthermore, the bidiagonal factorization to derive accurate algorithms is deduced. In Sect. 6, Wronskian matrices of Poisson bases are considered and the method to derive accurate computations is given. Finally, Sect. 7 illustrates numerical experiments confirming the accuracy of the presented methods for the computation of eigenvalues, singular values, inverses, or the solution of some linear systems related to Gram and Wronskian matrices of the considered bases. The complexity of the algorithms for solving the mentioned algebraic problems is comparable to that of the traditional LAPACK algorithms, which, as we shall see, deliver no such accuracy.

2 Notations and preliminary results

Let us recall that, given a Hilbert space U with inner product \(\langle \cdot , \cdot \rangle \) and an \((n+1)\)-dimensional subspace V generated by a basis \((f_0,\ldots , f_n)\), the best approximation in V, with respect to the norm induced by the inner product, of a given \(u\in U\) is \(v=\sum _{i=0}^n c_{i+1}f_i\), where \(c = (c_1, \ldots , c_{n+1})^T\) is the solution of the linear system \(Gc = b\) and \(G=(G_{i,j})_{1\le i,j\le n+1}\) is the Gram matrix such that

$$\begin{aligned} G_{i,j}:=\langle f_{i-1}, f_{j-1} \rangle \end{aligned}$$

and \(b=(b_{i })_{1\le i \le n+1}\) with \(b_i :=\langle f_{i-1}, u \rangle \).
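As a simple illustration of this construction (our example, not taken from the references), the following Python sketch computes the best approximation in \(L_2[0,1]\) of \(u(x)=e^x\) from the space of quadratic polynomials with the monomial basis; the Gram matrix is then the \(3\times 3\) Hilbert matrix and the coefficients solve \(Gc=b\). The routine scipy.integrate.quad is assumed to be available for the inner products \(b_i=\langle f_{i-1}, u\rangle \).

```python
import numpy as np
from scipy.integrate import quad

# Basis (f_0, f_1, f_2) = (1, x, x^2), target u(x) = e^x on [0, 1].
n = 2

# Gram matrix: G_{i,j} = <x^{i-1}, x^{j-1}> = 1/(i+j-1), the Hilbert matrix.
G = np.array([[1.0 / (i + j + 1) for j in range(n + 1)] for i in range(n + 1)])

# Right-hand side: b_i = <x^{i-1}, u>, computed by numerical quadrature.
b = np.array([quad(lambda t, i=i: t**i * np.exp(t), 0.0, 1.0)[0]
              for i in range(n + 1)])

c = np.linalg.solve(G, b)                      # coefficients of v = sum c_{i+1} f_i
v = lambda t: sum(c[i] * t**i for i in range(n + 1))
print(v(0.5), np.exp(0.5))                     # the approximation vs. the target
```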

Given a basis \((f_0, \ldots , f_n)\) of a space of functions with n continuous derivatives at \(x\in {\mathbb {R}}\), its Wronskian matrix at x is

$$\begin{aligned} W(f_0, \ldots , f_n)(x) :=(f_{j-1}^{(i-1)}(x))_{i,j=1, \ldots , n+1}, \end{aligned}$$

where \(f^{(0)}(x):= f(x)\), \(f^{(1)}(x):=f'(x)\) and \(f^{(i)}(x)\), \(i\le n\), denotes the i-th derivative of f at x.

We say that a matrix is totally positive (TP) if all its minors are nonnegative and strictly totally positive (STP) if all its minors are positive. In [1, 5, 21] interesting applications of TP matrices can be found.

Neville elimination (NE) is an alternative procedure to Gaussian elimination (see [6,7,8]). Given an \((n+1)\times (n+1)\) nonsingular matrix \(A=(a_{i,j})_{1\le i,j\le n+1}\), the NE process computes a sequence of matrices

$$\begin{aligned} A^{(1)} :=A\rightarrow A^{(2)} \rightarrow \cdots \rightarrow A^{(n+1)}, \end{aligned}$$
(1)

so that the entries of \(A^{(k+1)}\) below the main diagonal in the first k columns, \(1\le k\le n\), are zeros and so, \(A^{(n+1)} \) is an upper triangular matrix. The matrix \(A^{(k+1)} =(a_{i,j}^{(k+1)})_{1\le i,j\le n+1}\) is computed from \(A^{(k)}=(a_{i,j}^{(k)})_{1\le i,j\le n+1}\) by means of the following relations:

$$\begin{aligned} a_{i,j}^{(k+1)}:={\left\{ \begin{array}{ll} a^{(k)}_{i,j}, \quad &{}\text {if} \ 1\le i \le k, \\ a^{(k)}_{i,j}- \frac{a_{i,k}^{(k)}}{ a_{i-1,k}^{(k)} } a^{(k)}_{i-1,j} , \quad &{}\text {if } k+1\le i,j\le n+1 \text { and } a_{i-1,k}^{(k)}\ne 0,\\ a^{(k)}_{i,j}, \quad &{}\text {if} \quad k+1\le i\le n+1 \text { and } a_{i-1,k}^{(k)}= 0. \end{array}\right. } \end{aligned}$$
(2)

The (i, j) pivot of the NE process of the matrix A is

$$\begin{aligned} p_{i,j} := a_{i,j}^{(j)}, \quad 1\le j\le i\le n+1, \end{aligned}$$
(3)

and, in particular, we say that \(p_{i,i}\) is the i-th diagonal pivot. Let us observe that whenever all pivots are nonzero, no row exchanges are needed in the NE procedure.

The (i, j) multiplier of the NE process of the matrix A is

$$\begin{aligned} m_{i,j}:={\left\{ \begin{array}{ll} a_{i,j}^{(j)} / a_{i-1,j}^{(j)}={ p_{i,j} }/{ p_{i-1,j}}, &{} \text {if }\, a_{i-1,j}^{(j)} \ne 0,\\ 0, &{} \text {if }\, a_{i-1,j}^{(j)} = 0, \end{array}\right. } ,\quad 1\le j < i\le n+1. \end{aligned}$$
(4)
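For illustration, here is a minimal Python sketch (ours, not the authors' code) of the NE recursion (2), under the assumption that all pivots encountered are nonzero so that no row exchanges occur; it collects the pivots (3) and the multipliers (4).

```python
import numpy as np

def neville_elimination(A):
    """Neville elimination (2) of a square matrix, assuming nonzero pivots
    so that no row exchanges are needed. Returns the pivots p[i, j] of (3)
    (diagonal pivots on the diagonal) and the multipliers m[i, j] of (4)."""
    A = np.array(A, dtype=float)
    N = A.shape[0]
    p = np.zeros((N, N))
    m = np.zeros((N, N))
    for k in range(N):
        p[k:, k] = A[k:, k]                  # p_{i,j} = a_{i,j}^{(j)} for column j = k
        for i in range(N - 1, k, -1):        # bottom-up: each row uses the row above it
            m[i, k] = A[i, k] / A[i - 1, k]
            A[i, k:] -= m[i, k] * A[i - 1, k:]
    return p, m

# Example: a 4x4 Hilbert matrix (a TP Gram matrix); its diagonal pivots are positive.
H = np.array([[1.0 / (i + j + 1) for j in range(4)] for i in range(4)])
p, m = neville_elimination(H)
print(np.diag(p))
```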

NE is a useful tool to deduce whether a given matrix is TP or STP, as shown in the following characterization, derived from Theorem 4.2 and the arguments on p. 116 of [8].

Theorem 1

A nonsingular matrix \(A=(a_{i,j})_{1\le i,j\le n+1}\) is TP if and only if it admits a factorization of the form

$$\begin{aligned} A=F_nF_{n-1}\cdots F_1 D G_1 \cdots G_{n-1} G_n, \end{aligned}$$
(5)

where \(F_i\) and \(G_i\) are TP lower and upper triangular bidiagonal matrices, respectively, given by

$$\begin{aligned} { F_i=\left( \begin{array}{cccccccccc} 1 \\ 0 &{} 1 \\ &{} \ddots &{} \ddots \\ &{} &{} &{} 0 &{} 1 \\ &{} &{} &{} &{} m_{i+1,1} &{} 1 \\ &{} &{} &{} &{} &{} m_{i+2,2} &{} 1 \\ &{} &{} &{} &{} &{} &{} \ddots &{} \ddots \\ &{} &{} &{} &{} &{} &{} &{} m_{n+1,n+1-i} &{} 1 \end{array} \right) , \quad G_i^T=\left( \begin{array}{cccccccccc} 1 \\ 0 &{} 1 \\ &{} \ddots &{} \ddots \\ &{} &{} &{} 0 &{} 1 \\ &{} &{} &{} &{} \widetilde{m}_{i+1,1} &{} 1 \\ &{} &{} &{} &{} &{} \widetilde{m}_{i+2,2} &{} 1 \\ &{} &{} &{} &{} &{} &{} \ddots &{} \ddots \\ &{} &{} &{} &{} &{} &{} &{} \widetilde{m}_{n+1,n+1-i} &{} 1 \end{array} \right) ,}\nonumber \\ \end{aligned}$$
(6)

and \(D=\mathrm{diag}\left( p_{1,1},\ldots , p_{n+1,n+1}\right) \) has positive diagonal entries. The diagonal entries \(p_{i,i}\) of D are the diagonal pivots of the NE process of A and the elements \(m_{i,j}\), \(\widetilde{m}_{i,j}\) are nonnegative and coincide with the multipliers (4) of the NE process of A and \(A^T\), respectively. If, in addition, the entries \(m_{i,j}\), \(\widetilde{m}_{i,j}\) satisfy

$$\begin{aligned} m_{i,j}=0 \Rightarrow m_{h,j}=0, \quad \forall \, h>i, \quad \text {and} \quad \widetilde{m}_{i,j}=0 \Rightarrow \widetilde{m}_{i,k}=0, \quad \forall \, k>j, \end{aligned}$$
(7)

then the decomposition (5) is unique.

In [10], the bidiagonal factorization (5) of an \((n+1)\times (n+1)\) nonsingular and TP matrix A is represented by means of a matrix \(BD(A)=( BD(A)_{i,j})_{1\le i,j\le n+1} \) such that

$$\begin{aligned} BD(A)_{i,j}:={\left\{ \begin{array}{ll} m_{i,j}, &{} \text {if } i>j, \\ p_{i,i}, &{} \text {if } i=j, \\ \widetilde{m}_{ j ,i}, &{} \text {if } i<j. \\ \end{array}\right. } \end{aligned}$$
(8)
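The following Python sketch (ours) makes this representation concrete: it expands a matrix BD(A) back into the product (5). As a small test we use the classical fact that the symmetric Pascal matrix admits a bidiagonal decomposition in which all multipliers and diagonal pivots equal 1.

```python
import numpy as np

def expand_bd(B):
    """Rebuild A = F_n ... F_1 D G_1 ... G_n from the representation (8):
    the strict lower triangle of B holds the multipliers m_{i,j}, the
    diagonal the pivots p_{i,i}, the strict upper triangle the ~m_{j,i}."""
    B = np.asarray(B, dtype=float)
    N = B.shape[0]
    A = np.diag(np.diag(B))                   # the diagonal factor D
    for i in range(1, N):                     # i = 1, ..., n
        F, G = np.eye(N), np.eye(N)
        for r in range(i, N):                 # subdiagonal rows i+1, ..., n+1 (1-based)
            F[r, r - 1] = B[r, r - i]         # m_{r+1, r+1-i}
            G[r - 1, r] = B[r - i, r]         # ~m_{r+1, r+1-i}
        A = F @ A @ G                         # accumulates F_n...F_1 D G_1...G_n
    return A

B = np.ones((4, 4))                           # BD of the 4x4 symmetric Pascal matrix
print(expand_bd(B))                           # binomial entries C(i+j, i)
```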

Remark 1

By Theorem 4.3 of [8], if \(m_{i,j}>0\), \(\widetilde{m}_{i,j}>0\), \(1\le j<i\le n+1\), and \(p_{i,i}>0\), \(1\le i\le n+1\), then A is STP.

Remark 2

Using the results in [6,7,8], given the bidiagonal factorization (5) of a nonsingular and TP matrix A, a bidiagonal decomposition of \(A^{-1}\) can be computed as

$$\begin{aligned} A^{-1}= \widetilde{G}_1\cdots \widetilde{G}_{n-1} \widetilde{G}_n D^{-1}\widetilde{F}_n \widetilde{F}_{n-1}\cdots \widetilde{F}_1, \end{aligned}$$
(9)

where \(\widetilde{F}_i\) and \(\widetilde{G}_i\) are the lower and upper bidiagonal matrices of the form (6) obtained when replacing the off-diagonal entries \(\{m_{i+1,1},\ldots , m_{n+1,n+1-i}\} \) and \(\{{\tilde{m}}_{i+1,1},\ldots , {\tilde{m}}_{n+1,n+1-i}\} \) by \(\{-m_{i+1,i},\ldots ,-m_{n+1,i} \}\) and \(\{-\tilde{m}_{i+1,i},\ldots ,- \tilde{m}_{n+1,i}\}\), respectively.

Remark 3

If a matrix A is nonsingular and TP, then \(A^{T}\) is also a nonsingular and TP matrix. Furthermore, by Theorem 1,

$$\begin{aligned} A^{T}={G}_n^{T}{G}_{n-1}^{T}\cdots {G}_1^{T} D {F}_1^{T} \cdots {F}_{n-1}^{T}{F}_n^T, \end{aligned}$$
(10)

where \({F}_i\) and \({G}_i\) are the lower and upper bidiagonal matrices in (6). Consequently, if A is symmetric and (7) is satisfied, taking into account the uniqueness of the factorization, we immediately deduce that \(G_i=F_i^{T}\), \(i=1,\ldots ,n\), and then we have

$$\begin{aligned} A=F_nF_{n-1}\cdots F_1 D F_1^T \cdots F_{n-1}^T F_n^T, \end{aligned}$$

where \(F_i\) is the lower bidiagonal matrix in (6), whose off-diagonal entries coincide with the multipliers of the NE process of A and D is the diagonal matrix with the pivots.

We say that a real value x is computed with high relative accuracy (HRA) whenever the computed value \({\tilde{x}}\) satisfies

$$\begin{aligned} \frac{ | x-{\tilde{x}} | }{ | x | } < Ku, \end{aligned}$$

where u is the unit round-off and \(K>0\) is a constant independent of the arithmetic precision. Clearly, HRA implies that the relative errors in the computations are of the order of the machine precision. It is well known that a sufficient condition for an algorithm to be computed with HRA is the no inaccurate cancellation (NIC) condition, which is satisfied if the algorithm only evaluates products, quotients, sums of numbers of the same sign, subtractions of numbers of opposite sign, or subtractions of initial data (cf. [4, 10]).
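A tiny numerical illustration of the NIC condition (our example, not from the references): products and quotients keep relative errors of the order of u, while a subtraction of nearly equal like-signed computed quantities can destroy most of the correct digits.

```python
import numpy as np

u = np.finfo(float).eps / 2        # unit round-off of IEEE double precision, ~1.1e-16

# Products and quotients of data: relative error stays of order u.
safe = (1.0 / 3.0) * 3.0
print(abs(safe - 1.0))             # 0.0 or ~u

# Subtracting nearly equal like-signed *computed* values: the rounding made
# when forming 1 + 1e-13 is magnified by a factor ~1e13 in the relative error.
cancel = (1.0 + 1e-13) - 1.0       # the exact value would be 1e-13
print(abs(cancel - 1e-13) / 1e-13) # ~1e-3 rather than ~1e-16: most digits are lost
```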

If the bidiagonal factorization (5) of a nonsingular and TP matrix A can be computed with HRA, then the computation of its eigenvalues and singular values, the computation of \(A^{-1}\), and even the resolution of \(Ax=b\) for vectors b with alternating signs can also be performed with HRA using the algorithms provided in [11].

Finally, let \((p_{0} ,\ldots ,p_{n})\) be the monomial basis

$$\begin{aligned} p_{i}(x):=x^i, \quad i=0,\ldots ,n, \end{aligned}$$
(11)

of the space \({{\mathbf {P}}}^n\) of polynomials of degree less than or equal to n.

The following result will be used later and restates Corollary 1 of [15] providing the bidiagonal factorization (5) of \(W(p_{0},\ldots ,p_{n})(x)\), \(x\in {{\mathbb {R}}}\).

Proposition 1

Let \((p_{0},\ldots ,p_{n})\) be the monomial basis given in (11). For any \(x\in {\mathbb {R}}\), the Wronskian matrix \(W(p_{0},\ldots ,p_{n})(x)\) is nonsingular and can be factorized as follows,

$$\begin{aligned} W(p_{0},\ldots ,p_{n})(x)=D G_{1,n} \cdots G_{n-1,n}G_{n,n}, \end{aligned}$$
(12)

where \(D=\mathrm{diag}\left( 0!,1!,\ldots ,n!\right) \) and \(G_{i,n}\), \(i=1,\ldots ,n\), are the upper triangular bidiagonal matrices in (6) with

$$\begin{aligned} {\widetilde{m}}_{k,k-i} =x,\quad i+1\le k\le n+1. \end{aligned}$$
(13)

Moreover, if \(x>0\) then \(W(p_0,\ldots ,p_n )(x)\) is nonsingular and TP, its bidiagonal decomposition (5) is given by (12) and (13), and it can be computed with HRA.

In [15], using this result, accurate computations with Wronskian matrices of monomial bases are achieved.
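As a numerical sanity check of (12) and (13) (our sketch, not part of [15]), the product \(D G_{1,n}\cdots G_{n,n}\) can be assembled for a small n and compared with the Wronskian of the monomials computed from the exact derivatives \((x^j)^{(i)}=\frac{j!}{(j-i)!}x^{j-i}\).

```python
import numpy as np
from math import factorial

n, x = 4, 2.5
N = n + 1

# Wronskian of the monomial basis: W[i, j] = (x^j)^{(i)} = j!/(j-i)! x^(j-i), j >= i.
W = np.zeros((N, N))
for i in range(N):
    for j in range(i, N):
        W[i, j] = factorial(j) / factorial(j - i) * x**(j - i)

# Factors of (12): D = diag(0!, ..., n!) and the bidiagonal G_{i,n} with entries x.
P = np.diag([float(factorial(i)) for i in range(N)])
for i in range(1, N):
    G = np.eye(N)
    for r in range(i, N):            # ~m_{k,k-i} = x for k = i+1, ..., n+1 (1-based)
        G[r - 1, r] = x
    P = P @ G                        # accumulates D G_{1,n} ... G_{n,n}

print(np.allclose(P, W))             # True
```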

In the sequel we shall consider Gram and Wronskian matrices of geometric and Poisson bases. We will now deduce their bidiagonal decomposition (5) and see that algebraic computations with HRA can be achieved for all considered cases.

3 Total positivity and accurate computations with Gram matrices of geometric bases

The geometric distribution has many applications in population and econometric models. Let us recall that the probability of k failures before the first success is given by

$$\begin{aligned} \displaystyle P(k{\text { failures until a success}}):=(1-t)^{k}t, \end{aligned}$$

where the probability of success is \(t \in [0,1]\). Then, for a given \(n\in {\mathbb {N}}\), we can define an \((n+1)\)-dimensional polynomial basis \((g^n_0,\ldots , g_n^n)\), where

$$\begin{aligned} g_{k}^n(x):=(1-x)^{k}x, \quad k=0,\ldots ,n. \end{aligned}$$
(14)

In Sect. 4 of [14], it is proved that the collocation matrix at positive values \(0<x_1<\cdots<x_{n+1}<1\) of \((g^n_0,\ldots , g_n^n)\) is STP. Furthermore, the bidiagonal factorization (5) of the collocation matrix \((g_{j-1}^n(x_{i-1} ))_{1\le i,j\le n+1}\) is deduced by taking into account that each basis function \(g_k^n(x)\) can be obtained by scaling the polynomials \((1-x)^k\) with the positive function \(\varphi (x)=x\), \(k=0,\ldots ,n\). Using this factorization, HRA algebraic computations with collocation matrices of geometric bases \((g^n_0,\ldots , g_n^n)\) are achieved.

The geometric basis \((g^n_0,\ldots , g_n^n)\) given in (14) belongs to the vector space \(L_2[0,1]\) of square integrable functions, which is a Hilbert space under the inner product

$$\begin{aligned} \langle f,g\rangle := \int _{0}^1 f(t)g(t)\, dt. \end{aligned}$$
(15)

The Gram matrix of \((g^n_0,\ldots , g_n^n)\) is the symmetric matrix \(G =(G_{i,j} )_{1\le i,j\le n+1}\), with

$$\begin{aligned} G_{i,j} := \int _0^1 g^n_{i-1} (t) g^n_{j-1}(t)\,dt = \frac{ 2 }{ ( i+j-1) ( i+j) ( i+j+1) }= \frac{ 1 }{ 3 {i+j+1 \atopwithdelims ()3} }, \quad 1\le i,j\le n+1.\nonumber \\ \end{aligned}$$
(16)

The following result proves that G is STP, providing the multipliers and the diagonal pivots of the NE of G.

Theorem 2

The \((n+1)\times (n+1)\) Gram matrix G described by (16) is STP. Moreover, G admits a factorization of the form (5) such that

$$\begin{aligned} G =F_{n,n}F_{n-1,n}\cdots F_{1,n} D_n G_{1,n} \cdots G_{n-1,n } G_{n,n}, \end{aligned}$$
(17)

where \(F_{i,n}\) and \(G_{i,n}\), \(i=1,\ldots ,n\), are the lower and upper triangular bidiagonal matrices given by (6) and \(D_n=\mathrm{diag}\left( p_{1,1},\ldots , p_{n+1,n+1}\right) \). The entries \(m_{i,j}, \widetilde{m}_{i,j}\) and \(p_{i,i}\) are given by

$$\begin{aligned} m_{i,j}= & {} \frac{(i-1)( i+1)}{ ( i+j ) ( i+j+1) } ,\quad \widetilde{m}_{i,j} = {m}_{i,j},\quad 1\le j<i\le n+1, \end{aligned}$$
(18)
$$\begin{aligned} p_{1,1}= & {} 1/3, \quad p_{i+1,i+1}= \frac{ i^2(i+2)^2}{(2i+1)(2i+2)^2(2i+3)} p_{i,i} , \quad 1\le i\le n. \end{aligned}$$
(19)

Proof

Let \(G^{(1)}:=G\) and \(G^{(k)} =(G_{ij}^{(k)})_{1\le i,j \le n+1}\), \(k=2,\ldots , n+1\), be the computed matrices after \(k-1\) steps of the NE of G. Using induction on k, let us first see that

$$\begin{aligned} G_{i,j}^{(k)}= \frac{ {j-1\atopwithdelims ()k-1} }{(k+2) {i+k \atopwithdelims ()k-1} {i+j+1\atopwithdelims ()k+2} } , \quad 1\le i,j\le n+1. \end{aligned}$$
(20)

For \(k=1\), \(G_{i,j}^{(1)}= \frac{ 1 }{ 3 {i+j+1 \atopwithdelims ()3} } \) and (20) holds. Let us now suppose that (20) holds for some \(k\in \{1,\ldots ,n-1 \}\). Then, it can be easily checked that

$$\begin{aligned} \frac{G_{i,k}^{(k)}}{G_{i-1,k}^{(k)}}= \frac{ {i+k -1 \atopwithdelims ()k-1} {i+k \atopwithdelims ()k+2} }{ {i+k \atopwithdelims ()k-1} {i+k +1 \atopwithdelims ()k+2} } = \frac{ (i-1)(i+1) }{ (i+k )( i+k+1 ) } , \quad i=k+1,\ldots ,n+1. \end{aligned}$$

Since \(G_{i,j}^{(k+1)}=G_{i,j}^{(k)}- \frac{G_{i,k}^{(k)}}{G_{i-1,k}^{(k)}}G_{i-1,j}^{(k)}\), we have

$$\begin{aligned} G_{i,j}^{(k+1)}= & {} G_{i,j}^{(k)}- \frac{ (i-1)(i+1) }{ (i+k )( i+k+1 ) } G_{i-1,j}^{(k)} \nonumber \\= & {} \frac{ {j-1\atopwithdelims ()k-1} (k-1)! ( i+1)! (k+2)! (i+j-k-2)!}{ (k+2)(i+k)!(i+j)!}\left( \frac{ i+j-k-1 }{ i+j+1} - \frac{i-1 }{ i+k+1} \right) . \nonumber \\ \end{aligned}$$
(21)

Finally, taking into account the following equalities

$$\begin{aligned} \frac{ i+j-k-1 }{ i+j+1} - \frac{i-1 }{ i+k+1} =\frac{(k+2)(j-k)}{(i+j+1)(i+k+1)}, \quad (j-k) {j-1\atopwithdelims ()k-1} (k-1)! = {j-1\atopwithdelims ()k } k !, \end{aligned}$$

we can write

$$\begin{aligned} G_{i,j}^{(k+1)}= & {} \frac{ {j-1\atopwithdelims ()k } }{ (k+3) } \frac{ k! ( i+1)! }{ (i+k+1)! }\frac{ (k+3)! (i+j-k-2)!}{(i+j+1)!} = \frac{ {j-1\atopwithdelims ()k }}{(k+3) {i+k+1\atopwithdelims ()k} {i+j+1\atopwithdelims ()k+3} },\nonumber \\ \end{aligned}$$
(22)

and formula (20) also holds for \(k+1\).

Now, by (3) and (20), we can easily deduce that the pivots \(p_{i,j}\) of the NE of G satisfy

$$\begin{aligned} p_{i,j}&=G_{i,j}^{(j)}= \frac{ 1 }{(j+2) {i+j \atopwithdelims ()j-1} {i+j+1\atopwithdelims ()j+2} }, \quad 1\le j\le i\le n+1, \end{aligned}$$
(23)

and, for the particular case \(i=j\),

$$\begin{aligned} p_{i,i}&= \frac{ 1 }{(i+2) {2i \atopwithdelims ()i-1} {2i+1\atopwithdelims ()i+2} } = \frac{ \left( (i-1)! (i+1)! \right) ^2 }{(2i )! (2i+1)! } , \quad 1\le i\le n+1. \end{aligned}$$
(24)

Clearly, for \(i=1\) in (24) we obtain \(p_{1,1}=1/3\). Moreover, it can be checked that

$$\begin{aligned} \frac{p_{i+1,i+1}}{p_{i,i}} =\frac{ i^2(i+2)^2}{(2i+1)(2i+2)^2(2i+3)}, \end{aligned}$$

and (19) holds.

Now, let us observe that, by formula (24), the pivots of the NE of G are positive and so, this elimination can be performed without row exchanges.

Finally, using (4) and (23), the multipliers \(m_{i,j}\) can be written as

$$\begin{aligned} m_{i,j}&=\frac{ p_{i,j}}{p_{i-1,j} } = \frac{( i-1)( i+1)}{ ( i+j) ( i+j+1)}, \quad 1\le j<i\le n+1. \end{aligned}$$
(25)

Taking into account that G is a symmetric matrix, we conclude that \({\widetilde{m}}_{i,j}=m_{i,j}\), \(1\le j<i\le n+1\), and (18) holds. Clearly, the pivots and multipliers of the NE of G are positive and so, G is STP (see Remark 1).\(\square \)
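The closed forms (18) and (19) involve only products and quotients of positive numbers, which is what makes BD(G) computable with HRA. A small Python check (ours) fills BD(G) from these formulas and compares the pivots and multipliers with a direct Neville elimination of the matrix (16):

```python
import numpy as np

n = 5
N = n + 1

# BD(G) from formulas (18)-(19) (translated to 0-based indices).
B = np.zeros((N, N))
B[0, 0] = 1.0 / 3.0
for i in range(1, N):                          # recursion (19) for the diagonal pivots
    B[i, i] = B[i-1, i-1] * i**2 * (i+2)**2 / ((2*i+1) * (2*i+2)**2 * (2*i+3))
for i in range(1, N):                          # multipliers (18); ~m_{i,j} = m_{i,j}
    for j in range(i):
        B[i, j] = i * (i + 2) / ((i + j + 2) * (i + j + 3))
        B[j, i] = B[i, j]

# Direct Neville elimination of G from (16) must reproduce B.
A = np.array([[2.0 / ((i+j+1) * (i+j+2) * (i+j+3))
               for j in range(N)] for i in range(N)])
ok = True
for k in range(N):
    for i in range(N - 1, k, -1):
        m = A[i, k] / A[i - 1, k]
        ok = ok and np.isclose(m, B[i, k])
        A[i, k:] -= m * A[i - 1, k:]
print(ok and np.allclose(np.diag(A), np.diag(B)))   # True
```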

As a direct consequence of Theorem 2, we can deduce that the bidiagonal factorization (5) of the Gram matrix of the geometric basis \((g^n_0,\ldots , g_n^n)\) can be computed with HRA. Furthermore, taking into account Remark 2, its inverse matrix can also be computed with HRA. We state these important properties in the following result.

Corollary 1

The bidiagonal factorization (5) of the Gram matrix G described by (16), corresponding to the geometric basis \((g^n_0,\ldots , g_n^n)\) in (14), can be computed with HRA. Consequently, the computation of the eigenvalues and singular values of G, the computation of the matrix \(G^{-1}\), as well as the resolution of linear systems \(Gc = b\), where the entries of \(b = (b_1, \ldots , b_{n+1})^T\) have alternating signs, can be performed with HRA.

Section 7 will show the accurate results obtained when solving algebraic problems with Gram matrices of geometric bases, using the bidiagonal factorization (5) provided by Theorem 2 and the functions in the library TNTool of [12].

4 Bidiagonal factorization of Wronskian matrices of geometric bases

In this section we are going to deduce a bidiagonal decomposition of the form (5) of the Wronskian matrix of geometric bases (14). We shall see that, using this factorization and the algorithms in [12], many algebraic problems related to these matrices can be solved with HRA.

Let us start with some auxiliary results.

Lemma 1

For a given \(t \in {\mathbb {R}}\) and \(n \in {\mathbb {N}}\), let \(U_{k,n}=(u_{i,j}^{(k,n)})_{1\le j, i \le n+1}\), \(k=1, \ldots , n\), be the \((n+1)\times (n+1)\), upper triangular bidiagonal matrix with unit diagonal entries, such that

$$\begin{aligned} u_{i-1,i}^{(k,n)}:=0,\quad i=2,\ldots , k ,\qquad u_{i-1,i}^{(k,n)}:=t, \quad i= k+1 ,\ldots , n+1. \end{aligned}$$

Then \(U_n:=U_{1,n}\cdots U_{n,n}\) is an upper triangular matrix and

$$\begin{aligned} U_{n}= (u_{i,j}^{(n)})_{1 \le i, j \le n+1},\quad u_{i,j}^{(n)}= { j-1 \atopwithdelims ()i-1} t^{j-i}, \quad 1\le i \le j \le n+1. \end{aligned}$$
(26)

Proof

Since the matrix \(U_n\) is the product of upper triangular bidiagonal matrices, we deduce that \(U_n\) is upper triangular. Now, since \(U_n\) coincides with the product \(G_{1,n}\cdots G_{n,n}\) of Proposition 1 and, by (12), this product equals \(D^{-1}W(p_0,\ldots ,p_n)(t)\), we obtain

$$\begin{aligned} U_{n}= \left( \frac{1}{(i-1)!} (p_{j-1}(t))^{(i-1)} \right) _{i,j=1, \ldots , n+1}, \end{aligned}$$

where \(p_{j}(t):=t^j\), \(j=0,\ldots ,n\). Finally, taking into account that

$$\begin{aligned} (p_{j}(t))^{(i)} = \frac{j!}{(j-i)!} t^{j-i},\quad 0\le i \le j \le n, \end{aligned}$$

equalities (26) are immediately obtained. \(\square \)

Theorem 3

Let \((g_0^n,\ldots , g_n^n)\) be the \((n+1)\)-dimensional geometric basis defined in (14). The Wronskian matrix \(W:=W( g_0,\ldots ,g_n)(x)\) at a given \(x\in {\mathbb {R}}\), \(x\ne 0\), admits a factorization of the form

$$\begin{aligned} W= L_1 D_n U_{1,n} \cdots U_{n-1,n} U_{n,n}, \end{aligned}$$
(27)

where \(L_{1}=(l_{i,j}^{(1,n)})_{1\le j, i \le n+1}\) is the lower triangular bidiagonal matrix with unit diagonal entries, such that

$$\begin{aligned} l_{i,i-1}^{(1,n)}= \frac{ i-1}{ x}, \quad i=2,\ldots , n+1, \end{aligned}$$
(28)

\(U_{k,n}=(u_{i,j}^{(k,n)})_{1\le j, i \le n+1}\), \(k=1,\ldots ,n\), are the upper triangular bidiagonal matrices with unit diagonal entries, such that

$$\begin{aligned} u_{i-1,i}^{(k,n)}=0,\quad i=2,\ldots , k,\qquad u_{i-1,i}^{(k,n)}=1-x , \quad i= k+1 ,\ldots , n+1, \end{aligned}$$
(29)

and \(D_n\) is the diagonal matrix \(D_n=\mathrm{diag}\left( d_{1}^{(n )}, \ldots ,d_{n+1}^{(n )}\right) \) with

$$\begin{aligned} d_{i }^{(n )}= (-1) ^{i-1} (i-1)! x, \quad i=1,\ldots ,n+1. \end{aligned}$$
(30)

Proof

First, let us observe that \(L_1D_n =(\widetilde{l}_{i,j}^{(n)})_{1\le i,j\le n+1}\) is the lower triangular bidiagonal matrix such that

$$\begin{aligned} \widetilde{l}_{i,i}^{(n)}= (-1)^{i-1}(i-1)!x, \quad i=1,\ldots , n+1,\qquad \widetilde{l}_{i,i-1}^{(n)}= (-1)^{i}(i-1)!, \quad i=2,\ldots , n+1. \end{aligned}$$
(31)

On the other hand, using Lemma 1 with \(t:= 1-x \), we derive that \(U_n:=U_{1,n}\cdots U_{n-1,n}U_{n,n}\) is the \((n+1)\times (n+1)\) upper triangular matrix described by

$$\begin{aligned} U_n=(u_{i,j}^{(n)})_{1\le i,j\le n+1},\quad u_{i,j}^{(n)}= {j-1 \atopwithdelims ()i-1}\left( 1-x \right) ^{j-i}, \quad 1\le i\le j\le n+1. \end{aligned}$$
(32)

In order to prove the result, taking into account (27), (31) and (32), we have to check that

$$\begin{aligned} (g_{j-1}^n )^{(i-1)} (x)={\left\{ \begin{array}{ll} 0, &{} j=1,\ldots ,i-2 ,\\ (-1)^i(i-1)!, &{} j=i-1 ,\\ (-1)^{i-1}(i-1)! \left( 1-i+ix\right) , &{}j=i, \\ (-1)^{i-1}(i-1)!\frac{1}{j-i+1} {j-1\atopwithdelims ()i-1}(1-x)^{j-i} (1-i+jx) , &{} j> i, \end{array}\right. } \end{aligned}$$
(33)

for \(1\le i,j \le n+1\). Since

$$\begin{aligned} (g_{j-1}^n )^{(i-1)} (x) = \left( \sum _{k=0}^{j-1} {j-1\atopwithdelims ()k} (-1)^k x ^{k+1} \right) ^{(i-1)}, \end{aligned}$$

equalities (33) readily follow for \(1\le j\le i\) and \(i=1,\ldots ,n+1\). For \(j>i\), (33) can be proved by induction on i. If \(i=1\), we clearly have \( (-1)^0 (1-1)! {j-1\atopwithdelims ()0 } (1-x)^{j-1} jx / j = g_{j-1}^n(x)\), for \(j=2,\ldots ,n+1\). Now, let us suppose that (33) holds for some \(i\ge 1\) and all \(j> i\). Then, we can write

$$\begin{aligned} (g_{j-1} ^n)^{(i)} (x)= & {} (-1)^{i-1} (i-1)! \frac{1}{j-i+1} {j-1\atopwithdelims ()i-1} \left( (1-x)^{j-i} (1-i+jx) \right) ' \nonumber \\= & {} (-1)^{i } (i-1)! {j-1\atopwithdelims ()i-1}(1-x)^{j-i-1} \left( -i+ jx \right) . \end{aligned}$$
(34)

Using (34), since \((i-1)!{j-1\atopwithdelims ()i-1} =i! {j-1\atopwithdelims ()i}\frac{1}{j-i}\), we have

$$\begin{aligned} (g_{j-1}^n )^{(i)} (x) = (-1)^{i} i ! \frac{1}{j-i}{j-1\atopwithdelims ()i}(1-x)^{j-i-1} ( -i+ jx), \end{aligned}$$

for \(j=i+1,\ldots ,n+1\), and consequently (33) follows. \(\square \)

Example 1

Let us illustrate the bidiagonal factorization (27) of \( W( g_0,\ldots ,g_n)(x)\) with a simple example. For \(n=2\), the bidiagonal factorization (27) of the Wronskian matrix of the geometric basis \((x, x(1-x) ,x(1-x)^2)\) at \(x\in {\mathbb {R}}\), \(x\ne 0\), is

$$\begin{aligned}&W(x, x(1-x) ,x(1-x)^2) = \left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 1/x &{} 1 &{} 0 \\ 0 &{} 2/x &{} 1 \\ \end{array}\right) \left( \begin{array}{ccc} x &{} 0 &{} 0 \\ 0 &{} -x &{} 0 \\ 0 &{}0 &{} 2x \\ \end{array}\right) \left( \begin{array}{ccc} 1 &{} 1-x &{} 0 \\ 0 &{} 1 &{} 1-x \\ 0 &{}0 &{} 1 \\ \end{array}\right) \left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 1-x \\ 0 &{}0 &{} 1 \\ \end{array}\right) . \end{aligned}$$

Let us observe that, from (8) and Theorem 3, the bidiagonal factorization (27) of \( W:=W( g_0,\ldots ,g_n)(x)\) can be represented by means of the \((n+1)\times (n+1)\) matrix \(BD(W)=(BD(W)_{i,j})_{1\le i,j\le n+1}\) such that

$$\begin{aligned} BD(W)_{i,j}:={\left\{ \begin{array}{ll} (i-1)/x, &{} \text {if } i=j+1, \\ (-1) ^{i-1} (i-1)! x, &{} \text {if } i=j, \\ 1-x, &{} \text {if } i<j, \\ 0, &{} \text {else}. \\ \end{array}\right. } \end{aligned}$$
(35)

Analyzing the sign of the entries in (35), from Theorem 1, we deduce that the matrix \(W( g_0,\ldots ,g_n)(x)\) is not TP for any \(x\in {\mathbb {R}}\). However, the following result shows that, using the bidiagonal decomposition (27), the solution of several algebraic problems related to these matrices can be obtained with HRA.
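As a numerical check of Theorem 3 and (35) (our sketch), the product (27) can be assembled and compared with the Wronskian obtained by exact polynomial differentiation of \(g_k^n(x)=(1-x)^kx\):

```python
import numpy as np
from numpy.polynomial import Polynomial
from math import factorial

n, x = 4, 10.0
N = n + 1

# Wronskian of the geometric basis via exact polynomial differentiation.
W = np.zeros((N, N))
for j in range(N):
    g = Polynomial([0.0, 1.0]) * Polynomial([1.0, -1.0])**j   # g_j^n = x(1-x)^j
    for i in range(N):
        W[i, j] = g.deriv(i)(x) if i > 0 else g(x)

# Product L_1 D_n U_{1,n} ... U_{n,n} of (27), with entries (28)-(30).
L1 = np.eye(N)
for i in range(1, N):
    L1[i, i - 1] = i / x                                      # (28)
D = np.diag([(-1)**i * factorial(i) * x for i in range(N)])   # (30)
P = L1 @ D
for k in range(1, N):
    U = np.eye(N)
    for r in range(k, N):
        U[r - 1, r] = 1.0 - x                                 # (29)
    P = P @ U

print(np.allclose(P, W))                                      # True
```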

Corollary 2

Let \(W:=W(g_0,\ldots ,g_n)(x)\) be the Wronskian matrix of the geometric basis defined in (14) and J the diagonal matrix \(J:=\text {diag}((-1)^{i-1} )_{1\le i\le n+1}\). Then, for \(x\ge 1\),

$$\begin{aligned} W_{J} := W J \end{aligned}$$
(36)

is a TP matrix and its bidiagonal factorization (5) can be computed with HRA. Consequently, the computation of the singular values of W, of the matrix \( W^{-1}\) and the resolution of linear systems \( W c = b\), where the entries of \(b = (b_1, \ldots , b_{n+1})^T\) have alternating signs, can be performed with HRA.

Proof

Using (27) and the fact that \(J^2\) is the identity matrix, we can write

$$\begin{aligned} W_J= L_{1} (D_nJ)(JU_{1,n}J) \cdots (JU_{n,n}J), \end{aligned}$$
(37)

which gives the bidiagonal factorization (5) of \(W_J\). Now, it can be easily checked that if \( x-1\ge 0\), the bidiagonal matrices \(L_1\), \(JU_{i,n}J\), \(i=1,\ldots ,n\), as well as the diagonal matrix \(D_nJ\), are TP. By Theorem 1, \(W_J\) is TP for \(x\ge 1\) and then the computation of its bidiagonal decomposition (5), the computation of its eigenvalues and singular values, the inverse matrix \(W_{J}^{-1}\) and the resolution of \(W_{J}c= b\), where \(b = (b_1, \ldots , b_{n+1})^T\) has alternating signs, can be performed with HRA (see Section 3 of [11]).

On the other hand, since J is a unitary matrix, the singular values of \(W_{J} \) coincide with those of W and so, their computation for \(x\ge 1\) can be performed with HRA. Similarly, taking into account that

$$\begin{aligned} W ^{-1}= JW_{ J}^{-1}, \end{aligned}$$

we can compute \(W ^{-1}\) accurately. Finally, if we have a linear system of equations \(W c = b\), where the elements of \(b = (b_1, \ldots , b_{n+1})^T\) have alternating signs, we can solve with HRA the system \(W_{J}d = b\) and then obtain \(c=Jd\).\(\square \)

Section 7 will show that the resolution of algebraic problems with \(W_J\), and consequently with W, can be performed through the proposed bidiagonal factorization with an accuracy independent of the conditioning or the size of the problem.

5 Total positivity and factorizations of Gram matrices of Poisson bases

The Poisson distribution is useful when modeling the number of times an event occurs in an interval of time or space. An event can occur \(k=0, 1, 2,\ldots \) times in an interval. If the rate parameter, that is, the average number of events in an interval, is denoted by t, then the probability of observing k events in an interval is given by

$$\begin{aligned} {\displaystyle P(k{\text { events in interval}}) ={\frac{t^{k}}{k!}}}e^{-t}. \end{aligned}$$

The Poisson basis functions

$$\begin{aligned} P_k(x):= \frac{x^{k}}{k!} e^{-x}, \quad k\in {\mathbb {N}}, \end{aligned}$$
(38)

are the limit as n tends to infinity of the Bernstein basis of degree n over the interval [0, n], that is,

$$\begin{aligned} P_k(x)=\lim _{n\rightarrow \infty }B^n_k\left( x/n \right) , \quad B^n_k\left( x \right) ={n\atopwithdelims ()k}x^k(1-x)^{n-k},\quad x\in [0,1], \end{aligned}$$

and they also play a useful role in CAGD (see [20]).

In Sect. 4 of [14], it is proved that the collocation matrix at positive values \(x_1<\cdots <x_{n+1}\) of a basis of Poisson functions \((P_0,\ldots ,P_n)\) is STP on \((0,\infty )\). Furthermore, the bidiagonal factorization of the collocation matrix \((P_{j-1}(x_{i-1} ))_{1\le i,j\le n+1}\) is deduced by taking into account that each basis function \(P_k(x)\) can be obtained by scaling the polynomials \(x^k/k!\) with the positive function \(\varphi (x)=e^{-x}\), \(k=0,\ldots ,n\). Using this factorization, accurate algebraic computations with collocation matrices of Poisson bases are achieved.

The Poisson basis functions belong to the vector space \(L_2 (0, \infty )\) of square integrable functions, which is a Hilbert space under the inner product

$$\begin{aligned} \langle f,g\rangle := \int _{0}^\infty f(t)g(t)\, dt. \end{aligned}$$
(39)

The Gram matrix of the Poisson basis \((e^{-x}, xe^{ - x},\ldots , x^ne^{-x}/n!)\) is the symmetric matrix \(G =\left( g_{i,j}\right) _{1\le i,j\le n+1}\) with

$$\begin{aligned} g_{i,j}:= \frac{1}{(i-1)! (j-1)!} \int _{0}^\infty t^{i+j-2} e^{-2 t} \, dt= \frac{1}{2 ^{i+j-1} } \frac{(i+j-2)! }{ (i-1)! (j-1)! } ,\quad 1\le i,j\le n+1. \end{aligned}$$
(40)

In the following result, the multipliers and the diagonal pivots of the NE of the Gram matrix G described in (40) are provided. Furthermore, it is proved that G is an STP matrix.

Theorem 4

The \((n+1)\times (n+1)\) Gram matrix G described by (40) is STP. Moreover, G admits a factorization of the form (5) such that

$$\begin{aligned} G =F_{n,n}F_{n-1,n}\cdots F_{1,n} D_n G_{1,n} \cdots G_{n-1,n } G_{n,n}, \end{aligned}$$
(41)

where \(F_{i,n}\) and \(G_{i,n}\), \(i=1,\ldots ,n\), are the lower and upper triangular bidiagonal matrices given by (6) and \(D_n=\mathrm{diag}\left( p_{1,1},\ldots , p_{n+1,n+1}\right) \). The entries \(m_{i,j}, \widetilde{m}_{i,j}\) and \(p_{i,i}\) are given by

$$\begin{aligned} m_{i,j}= & {} 1/2 ,\quad \widetilde{m}_{i,j} = {m}_{i,j},\quad 1\le j<i\le n+1, \end{aligned}$$
(42)
$$\begin{aligned} p_{1,1}= & {} 1/2,\quad p_{i+1,i+1}= p_{i,i}/4, \quad 1\le i\le n. \end{aligned}$$
(43)

Proof

Let \(G^{(1)}:=G\) and \(G ^{(k)}:=(g_{ij}^{(k)})_{1\le i,j \le n+1}\), \(k=2,\ldots , n+1\), be the matrices obtained after \(k-1\) steps of the NE process of the Gram matrix G. Let us see, by induction on k, that

$$\begin{aligned} g_{i,j}^{(k)}= \frac{ (i+j-k-1)! }{ 2 ^{ i+j-1} (i-1)! (j-k)! },\quad 1\le j,i\le n+1. \end{aligned}$$
(44)

For \(k=1\), we have \(g_{i,j}^{(1)}= \frac{(i+j-2)! }{ 2 ^{ i+j-1} (i-1)! (j-1)! }\) and (44) holds. Let us now suppose that (44) holds for some \(k\in \{1,\ldots ,n-1 \}\). Then it can be easily checked that

$$\begin{aligned} \frac{g_{i,k}^{(k)}}{g_{i-1,k}^{(k)}}= \frac{ (i -1)! 2 ^{ i+k-2} (i-2)! }{ 2 ^{ i+k-1} (i-1)! (i-2)! } = \frac{ 1 }{2 }, \quad i=k+1,\ldots ,n+1. \end{aligned}$$

Since \(g_{i,j}^{(k+1)}=g_{i,j}^{(k)}-\frac{g_{i,k}^{(k)}}{g_{i-1,k}^{(k)}}g_{i-1,j}^{(k)}\), we have

$$\begin{aligned} g_{i,j}^{(k+1)}=g_{i,j}^{(k)}- \frac{ 1 }{2 } g_{i-1,j}^{(k)}= \frac{ (i+j-k-2)! }{ 2 ^{ i+j-1} (i-2)! (j-k)! } \left( \frac{ i+j-k-1}{i-1} -1\right) = \frac{ (i+j-k-2)! }{ 2 ^{ i+j-1} (i-1)! (j-k-1)! } , \end{aligned}$$

for \( 1\le j,i\le n+1\) and formula (44) also holds for \(k+1\).

Now, by (3) and (44), we can easily deduce that the pivots \(p_{i,j}\) of the NE of G satisfy

$$\begin{aligned} p_{i,j}&=g_{i,j}^{(j)}= \frac{ 1}{2 ^{ i+j-1}} , \quad 1\le j\le i\le n+1, \end{aligned}$$
(45)

and, for the particular case \(i=j\), \(p_{i,i}= { 1 }/{2 ^{2i-1}}\) for \(i=1,\ldots ,n+1\). Then, \(p_{1,1}=1/2\) and \( p_{i+1,i+1}/p_{i,i}=1/4 \) and so, (43) holds. By formula (45), \(p_{i,j}>0\) and so, the NE can be performed without row exchanges.

Finally, using (4) and (45), the multipliers \(m_{i,j}\) of the NE of G can be written as

$$\begin{aligned} m_{i,j}&=\frac{ p_{i,j}}{p_{i-1,j} } = 1/2 , \quad 1\le j<i\le n+1. \end{aligned}$$
(46)

Taking into account that G is a symmetric matrix, we conclude that \({\widetilde{m}}_{i,j}=m_{i,j}\), \(1\le j<i\le n+1\), and (42) holds. Clearly, the pivots and multipliers of the NE of G are positive and so, G is STP (see Remark 1).\(\square \)
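Again, formulas (42) and (43) are subtraction-free, so BD(G) is obtained with HRA. A short Python check (ours) against a direct Neville elimination of the matrix (40):

```python
import numpy as np
from math import comb

n = 5
N = n + 1

# Gram matrix (40); in 0-based indices g[i, j] = C(i+j, i) / 2^(i+j+1).
A = np.array([[comb(i + j, i) / 2.0**(i + j + 1)
               for j in range(N)] for i in range(N)])

# By (42)-(43), every multiplier is 1/2 and p_{i,i} = 1/2^(2i-1) (1-based).
ok = True
for k in range(N):
    for i in range(N - 1, k, -1):
        m = A[i, k] / A[i - 1, k]
        ok = ok and np.isclose(m, 0.5)
        A[i, k:] -= m * A[i - 1, k:]
ok = ok and np.allclose(np.diag(A), [0.5**(2 * i + 1) for i in range(N)])
print(ok)                                        # True
```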

As a direct consequence of Theorem 4, we can deduce that the bidiagonal factorization (5) of the Gram matrix of the Poisson basis \((e^{-x}, xe^{- x},\ldots , x^ne^{-x}/n!)\) can be computed with HRA. Furthermore, taking into account Remark 2, its inverse matrix can also be computed with HRA. We state these important properties in the following corollary.

Corollary 3

The bidiagonal factorization (5) of the Gram matrix G described by (40), corresponding to the Poisson basis \((e^{-x}, xe^{- x},\ldots , x^ne^{-x}/n!)\), can be computed with HRA. Consequently, the computation of the eigenvalues and singular values of G, the computation of the matrix \(G^{-1}\), as well as the resolution of linear systems \(Gc = b\), where the entries of \(b = (b_1, \ldots , b_{n+1})^T\) have alternating signs, can be performed with HRA.

Section 7 illustrates the accurate results obtained when solving algebraic problems with the Gram matrix G, using the bidiagonal factorization (5) provided by Theorem 4 and the functions in the library TNTool of [12].

6 Bidiagonal factorization of Wronskian matrices of Poisson bases

In this section we consider Wronskian matrices of Poisson bases and deduce their bidiagonal factorization (5). Using this factorization and the algorithms in [12], many algebraic problems related to these matrices can be solved with great accuracy.

Let us start with some auxiliary results.

Lemma 2

Let \(n\in {\mathbb {N}}\) and let \(L_{k,n}=(l_{i,j}^{(k,n)})_{1 \le i, j \le n+1}\), \(k=1,\ldots ,n\), be the lower triangular bidiagonal matrices with unit diagonal entries, such that

$$\begin{aligned} l_{i,i-1}^{(k,n)}:=0,\quad i=2,\ldots , k,\quad l_{i,i-1}^{(k,n)}:=-1,\quad i=k+1,\ldots , n+1. \end{aligned}$$

Then \(L_n:=L_{n,n}L_{n-1,n}\cdots L_{1,n}\) is a lower triangular matrix with

$$\begin{aligned} L_n =(l_{i,j}^{(n)})_{1\le i,j\le n+1},\quad l_{i,j}^{(n)} := (-1) ^{i+j}{ i-1\atopwithdelims ()j-1} ,\quad 1\le j\le i\le n+1. \end{aligned}$$
(47)

Proof

Clearly, \(L_n\) is a lower triangular matrix since it is the product of lower triangular bidiagonal matrices. Now let us prove (47) by induction on n. For \(n=1\),

$$\begin{aligned} L_1:=L_{1,1}=\left( \begin{array}{rl} 1 &{} \\ -1 &{} 1 \end{array} \right) \end{aligned}$$

and (47) clearly holds. Let us now suppose that (47) holds for \(n\ge 1\). Then

$$\begin{aligned} L_{n+1}:= L_{n+1,n+1} L_{n,n+1} \cdots L_{1,n+1} = {\tilde{L}}_{n+1} L_{1,n+1} \end{aligned}$$

where \({\tilde{L}}_{n+1}:= L_{n+1,n+1} L_{n,n+1}\cdots L_{2,n+1}\) satisfies \({\tilde{L}}_{n+1}=( \tilde{l}_{i,j}^{(n+1)})_{1\le i,j\le n+2}\) with \( \tilde{l}_{i,1}^{(n+1)}=\delta _{i,1}\), \( \tilde{l}_{1,i}^{(n+1)}=\delta _{1,i}\), \(i=1,\ldots ,n+2\), and the submatrix of \({\tilde{L}}_{n+1}\) formed by the rows and columns with indices in \(\{2,\ldots ,n+2\} \), denoted by \({\tilde{L}}_{n+1}[2,\ldots , n+2]\), satisfies \({\tilde{L}}_{n+1}[2,\ldots , n+2]= L_{n,n} L_{n-1,n}\cdots L_{1,n}\).

Then we have that

$$\begin{aligned} \tilde{l}_{i,j}^{(n+1)}= (-1) ^{i+j}{ i-2\atopwithdelims ()j-2} ,\quad 2\le j\le i\le n+2. \end{aligned}$$

Now, taking into account that

$$\begin{aligned} L_{n+1}= {\tilde{L}}_{n+1} L_{1,n+1}= {\tilde{L}}_{n+1} \left( \begin{array}{cccccc} 1 &{}&{} \\ -1 &{} 1 &{} &{} \\ &{} \ddots &{} \ddots &{} \\ &{} &{}-1&{} 1 \end{array} \right) , \end{aligned}$$

and the fact that \({ i-2\atopwithdelims ()j-2}+{ i-2\atopwithdelims ()j-1}={ i-1\atopwithdelims ()j-1}\), we deduce that \(L_{n+1}=(l_{i,j}^{(n+1)})_{1\le i,j\le n+2}\) satisfies

$$\begin{aligned} l_{i,j}^{(n+1)} = \tilde{l}_{i,j}^{(n+1)}- \tilde{l}_{i,j+1}^{(n+1)}= & {} (-1)^{i+j} { i-2\atopwithdelims ()j-2}-(-1)^{i+j+1}{ i-2\atopwithdelims ()j-1} =(-1)^{i+j} { i-1\atopwithdelims ()j-1} \\ \end{aligned}$$

for \(1\le j\le i\le n+2\). \(\square \)

Lemma 3

For a given \(t \in {\mathbb {R}}\) and \(n \in {\mathbb {N}}\), let \(U_{k,n}=(u_{i,j}^{(k,n)})_{1\le j, i \le n+1}\), \(k=1, \ldots , n\), be the \((n+1)\times (n+1)\), upper triangular bidiagonal matrix with unit diagonal entries, such that

$$\begin{aligned} u_{i-1,i}^{(k,n)}:=0,\quad i=2,\ldots , k,\qquad u_{i-1,i}^{(k,n)}:=\frac{t}{i-1}, \quad i= k+1 ,\ldots , n+1. \end{aligned}$$

Then, \(U_n:=U_{1,n}\cdots U_{n,n}\), is an upper triangular matrix and

$$\begin{aligned} U_{n}= (u_{i,j}^{(n)})_{1 \le i, j \le n+1},\quad u_{i,j}^{(n)}= \frac{t^{j-i}}{(j-i)!}, \quad 1\le i \le j \le n+1. \end{aligned}$$

Proof

First, let us observe that given \({\widetilde{M}}_n=({\widetilde{m}}_{i,j} )_{1\le i,j\le n+1}\) and a nonsingular diagonal matrix \(D_n=\mathrm{diag}\left( d_{1} ,\ldots ,d_{n+1} \right) \),

$$\begin{aligned} D_n{\widetilde{M}}_n=M_n D_n, \end{aligned}$$

where \(M_n=(m_{i,j} )_{1\le i,j\le n+1}\) satisfies \(m_{i,j} = {\widetilde{m}}_{i,j} d_{i }/d_{j} \) for \(i,j=1,\ldots ,n+1\). Now, let us define \(D_n:=\mathrm{diag}\left( d_i \right) _{1\le i \le n+1}\), such that \(d_i= (i-1)!\), \(i=1,\ldots ,n+1\), and let \({\widetilde{U}}_{k,n}=({\widetilde{u}}_{i,j}^{(k,n)})_{1\le j, i \le n+1}\), \(k=1, \ldots , n\), be the \((n+1)\times (n+1)\), upper triangular bidiagonal matrix with unit diagonal entries, such that

$$\begin{aligned} {\widetilde{u}}_{i-1,i}^{(k,n)}:=0,\quad i=2,\ldots , k,\qquad {\widetilde{u}}_{i-1,i}^{(k,n)}:=t, \quad i= k+1 ,\ldots , n+1. \end{aligned}$$

Taking into account the fact that \( d_{i-1} /d_{i} = 1/(i-1) \), \(i=2,\ldots ,n+1\), we can write

$$\begin{aligned} D_{n} {\widetilde{U}}_{1,n} \cdots {\widetilde{U}}_{n,n} = U_{1,n} \cdots U_{n,n} D_{n}. \end{aligned}$$

Consequently, \( U_{1,n} \cdots U_{n,n}= D_{n} {\widetilde{U}}_{1,n} \cdots {\widetilde{U}}_{n,n} D_{n}^{-1}\) and, applying Lemma 1 to \({\widetilde{U}}_{1,n} \cdots {\widetilde{U}}_{n,n}\),

$$\begin{aligned} U_{1,n} \cdots U_{n,n}= (u_{i,j}^{(n)})_{1 \le i, j \le n+1},\quad u_{i,j}^{(n)}= \frac{ (i-1)! }{ (j-1)!} {j-1\atopwithdelims ()i-1} t^{j-i} = \frac{t^{j-i}}{(j-i)!}, \quad 1\le i \le j \le n+1. \end{aligned}$$

\(\square \)

Now, we can deduce a bidiagonal factorization of Wronskian matrices of Poisson bases.

Theorem 5

Let \(n\in {\mathbb {N}}\) and let \((P_0,\ldots ,P_n)\) be the basis (38) of Poisson functions. For a given \(x\in {\mathbb {R}}\), \(W:=W(P_0,\ldots ,P_n)(x)\) admits a factorization of the form

$$\begin{aligned} W= L_{n,n}L_{n-1,n}\cdots L_{1,n }D_n U_{1,n} \cdots U_{n-1,n} U_{n,n}, \end{aligned}$$
(48)

where \(L_{k,n}=(l_{i,j}^{(k,n)})_{1\le j, i \le n+1}\), \(k=1,\ldots ,n\), are the lower triangular bidiagonal matrices with unit diagonal entries, such that

$$\begin{aligned} l_{i,i-1}^{(k,n)}=0,\quad i=2,\ldots , k,\quad l_{i,i-1}^{(k,n)}=-1,\quad i=k+1,\ldots , n+1, \end{aligned}$$

\(U_{k,n}=(u_{i,j}^{(k,n)})_{1\le j, i \le n+1}\), \(k=1,\ldots ,n\), are the upper triangular bidiagonal matrices with unit diagonal entries, such that

$$\begin{aligned} u_{i-1,i}^{(k,n)}=0,\quad i=2,\ldots , k,\quad u_{i-1,i}^{(k,n)} =\frac{x}{i-1},\quad i= k+1,\ldots ,n+1, \end{aligned}$$

and \(D_n\) is the diagonal matrix \(D_n=\mathrm{diag}\left( d_{1}^{(n)}, \ldots ,d_{n+1}^{(n)}\right) \) with \(d_i^{(n)}=e^{-x}\), \(i=1,\ldots ,n+1\).

Proof

By Lemma 2, \(L_n := L_ {n,n}L_{n-1,n}\cdots L_{1,n} \) is a lower triangular matrix and satisfies

$$\begin{aligned} L_n=(l_{i,j} ^{(n)})_{1\le i,j\le n+1},\quad l_{i,j}^{(n)}= (-1) ^{i+j}{ i-1\atopwithdelims ()j-1} ,\quad 1\le j\le i\le n+1. \end{aligned}$$

On the other hand, by Lemma 3, \(U_n:=U_{1,n} \cdots U_{n-1,n} U_{n,n}\) satisfies

$$\begin{aligned} U_n=(u_{i,j}^{(n)})_{1\le i,j\le n+1},\quad u_{i,j}^{(n)}=\frac{x^{j-i}}{(j-i)!},\quad 1\le i\le j\le n+1. \end{aligned}$$

Now, let us see that \(W=L_n D_n U_n\), that is,

$$\begin{aligned} P_{j-1}^{(i-1)} (x)= \left( \sum _{k=1}^{\min \{i,j\}} (-1)^{i+k} {i-1\atopwithdelims ()k-1}\frac{x^{j-k}}{(j-k)!}\right) e^{-x},\quad 1\le i, j\le n+1. \end{aligned}$$
(49)

We shall prove (49) by induction on i. For \(i=1\),

$$\begin{aligned} \sum _{k=1}^1 (-1)^{1+k}{1-1\atopwithdelims ()k-1}\frac{x^{j-k}}{(j-k)!}e^{-x}=\frac{x^{j-1}}{(j-1)!}e^{-x}= P_{j-1} (x), \quad j=1,\ldots ,n+1, \end{aligned}$$

and (49) holds. Now, let us assume that (49) holds for \(i\ge 1\). For any j such that \(1\le j\le i \), it can be checked that

$$\begin{aligned} \left( \sum _{k=1}^j (-1)^{i+k}{i-1\atopwithdelims ()k-1}\frac{x^{j-k}}{(j-k)!}e^{-x}\right) ' = \left( \sum _{k=1}^{j} (-1)^{i+k+1} c_k\frac{x^{j-k}}{(j-k)!}\right) e^{-x} \end{aligned}$$

where

$$\begin{aligned} c_1= {i-1\atopwithdelims ()0}=1 , \quad c_k = {i-1\atopwithdelims ()k-1}+{i-1\atopwithdelims ()k-2}={i \atopwithdelims ()k-1}, \quad k=2,\ldots ,j. \end{aligned}$$

In a similar way it can be checked that, for any \(j >i\), we have

$$\begin{aligned} \left( \sum _{k=1}^i (-1)^{i+k}{i-1\atopwithdelims ()k-1}\frac{x^{j-k}}{(j-k)!}e^{-x}\right) ' = \left( \sum _{k=1}^{i+1} (-1)^{i+k+1}c_k\frac{x^{j-k}}{(j-k)!}\right) e^{-x} \end{aligned}$$

where

$$\begin{aligned} c_1= {i-1\atopwithdelims ()0} , \quad c_k = {i-1\atopwithdelims ()k-1}+{i-1\atopwithdelims ()k-2}={i \atopwithdelims ()k-1},\quad k=2,\ldots ,i, \quad c_{i+1} := {i-1\atopwithdelims ()i-1}. \end{aligned}$$

Therefore,

$$\begin{aligned} P_{j-1}^{(i)} (x)= \left( \sum _{k=1}^{\min \{i+1,j\}} (-1)^{i+k+1} {i \atopwithdelims ()k-1}\frac{x^{j-k}}{(j-k)!}\right) e^{-x}, \end{aligned}$$

and (49) holds. \(\square \)

Example 2

Let us illustrate the bidiagonal factorization (48) of \(W(P_0,\ldots ,P_n)(x)\) with a simple example. For \(n=2\), the bidiagonal factorization of the Wronskian matrix of the basis \((e^{-x},xe^{-x},\frac{x^2}{2}e^{-x})\) at \(x\in {\mathbb {R}}\) is

$$\begin{aligned}&W(e^{-x},xe^{-x},\frac{x^2}{2}e^{-x}) = \left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 \\ 0 &{}-1 &{} 1 \\ \end{array}\right) \left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ -1 &{} 1 &{} 0 \\ 0 &{}-1 &{} 1 \\ \end{array}\right) \left( \begin{array}{ccc} e^{-x} &{} 0 &{} 0 \\ 0 &{} e^{-x} &{} 0 \\ 0 &{}0 &{} e^{-x} \\ \end{array}\right) \left( \begin{array}{ccc} 1 &{} x &{} 0 \\ 0 &{} 1 &{} \frac{x}{2} \\ 0 &{}0 &{} 1 \\ \end{array}\right) \left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} \frac{x}{2} \\ 0 &{}0 &{} 1 \\ \end{array}\right) . \end{aligned}$$

Using (8) and Theorem 5, the bidiagonal factorization (48) of \(W:=W(P_0,\ldots ,P_n)(x)\) can be represented by means of the matrix \(BD(W)=(BD(W)_{i,j})_{1\le i,j\le n+1}\) such that

$$\begin{aligned} BD(W)_{i,j}:={\left\{ \begin{array}{ll} -1, &{} \text {if } i>j, \\ e^{-x}, &{} \text {if } i=j, \\ x/ (j-1), &{} \text {if } i<j. \\ \end{array}\right. } \end{aligned}$$
(50)

Analyzing the sign of the entries in (50), we can deduce from Theorem 1 that the Wronskian matrix of the Poisson basis is not TP for any \(x\in {\mathbb {R}}\). However, the following result shows that the solution of several algebraic problems related to these matrices can be obtained with HRA by using the bidiagonal decomposition (48).
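For a numerical check of Theorem 5 and (50) (our sketch), note that (38) gives the recursion \(P_k'=P_{k-1}-P_k\) (with \(P_{-1}:=0\)), so the rows of W can be generated recursively and compared with the product (48):

```python
import numpy as np
from math import factorial, exp

n, x = 4, -2.0
N = n + 1

# Row i holds the i-th derivatives: P_k' = P_{k-1} - P_k, with P_{-1} := 0.
W = np.zeros((N, N))
W[0] = [x**k / factorial(k) * exp(x) for k in range(N)]
for i in range(1, N):
    W[i, 1:] = W[i - 1, :-1] - W[i - 1, 1:]
    W[i, 0] = -W[i - 1, 0]                     # P_0' = -P_0

# Product L_{n,n} ... L_{1,n} D_n U_{1,n} ... U_{n,n} of Theorem 5.
P = np.diag([exp(x)] * N)
for k in range(1, N):
    L, U = np.eye(N), np.eye(N)
    for r in range(k, N):
        L[r, r - 1] = -1.0                     # subdiagonal of L_{k,n}
        U[r - 1, r] = x / r                    # superdiagonal x/(i-1) of U_{k,n}
    P = L @ P @ U                              # the L factors accumulate on the left

print(np.allclose(P, W))                       # True
```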

Corollary 4

Let \(W:= W( P_ 0,\ldots , P_ n)(x)\) be the Wronskian matrix of the Poisson basis defined in (38) and J the diagonal matrix \(J:=\text {diag}((-1)^{i-1} )_{1\le i\le n+1}\). Then, for any \(x<0\),

$$\begin{aligned} W_J:=JWJ \end{aligned}$$
(51)

is an STP matrix and, for \(x=0\), \(W_J\) is TP. If, in addition, we know \(e^{-x}\) with HRA, then the bidiagonal factorization (5) of \(W_J\) can be computed with HRA. Consequently, the computation of the eigenvalues and singular values of W, the computation of the matrix \(W^{-1}\), as well as the resolution of linear systems \(Wc= b\), where the entries of \(b = (b_1, \ldots , b_{n+1})^T\) have the same sign, can be performed with HRA.

Proof

Using Theorem 5 and the fact that \(J^2\) is the identity matrix, by (48) we can write

$$\begin{aligned} W_J=(JL_{n,n}J)\cdots (JL_{1,n}J)(JD_nJ)(JU_{1,n}J) \cdots (JU_{n,n}J), \end{aligned}$$
(52)

which gives its bidiagonal factorization (5). Now, it can be easily checked that the multipliers and diagonal pivots of the bidiagonal factorization (52) of \(W_J\) are positive if \(x<0\) and nonnegative for \(x=0\). Therefore, by Theorem 1, \(W_J\) is TP for \(x=0\) and, by Remark 1, \(W_J\) is STP for any \(x<0\), and its bidiagonal decomposition (52) can be computed with HRA for \(x\le 0\). This fact guarantees the computation with HRA of the eigenvalues and singular values of \(W_J\), the inverse matrix \(W_J^{-1}\) and the solution of the linear systems \(W_J c= d\), where \(d = (d_1, \ldots , d_{n+1})^T\) has alternating signs (see Section 3 of [11]).

Let us observe that, since J is a unitary matrix, the eigenvalues and singular values of W coincide with those of \(W_J\) and therefore, using the bidiagonal decomposition (52) of \(W_J\), their computation for \(x<0\) can be performed with HRA.

For the accurate computation of \(W^{-1}\), we can take into account that

$$\begin{aligned} W^{-1}=JW_J^{-1}J. \end{aligned}$$
(53)

Since, for \(x<0\), \(W_J^{-1}=({\tilde{w}}_{i,j})_{1\le i,j\le n+1}\) can be computed with HRA and, by (53), the inverse of the Wronskian matrix W satisfies \(W^{-1}=((-1)^{i+j}{\tilde{w}}_{i,j})_{1\le i,j\le n+1}\), we can also accurately compute \(W^{-1}\) by means of a suitable change of sign of the accurately computed entries of \(W_J^{-1}\).

Finally, if we have a linear system of equations \(W c = b\), where the elements of \(b = (b_1, \ldots , b_{n+1})^T\) have the same sign, we can compute with HRA the solution \(d\in {\mathbb {R}}^{n+1}\) of \(W_J d= Jb\) and, consequently, the solution \(c\in {\mathbb {R}}^{n+1}\) of the initial system since \(c=Jd\). \(\square \)

Observe that Corollary 4 requires the evaluation of \(e^{-x}\) with HRA. Even when this condition does not hold, Sect. 7 will show that the resolution of algebraic problems with \(W_J\) can be performed through the proposed bidiagonal factorization with an accuracy independent of the conditioning or the size of the problem and so, similar to HRA. Consequently, as in the proof of Corollary 4, we deduce that the computation of the eigenvalues and singular values of W, of the matrix \(W^{-1}\), as well as the solution of linear systems \(Wc = b\), where the entries of \(b = (b_1, \ldots , b_{n+1})^T\) have the same sign, can be performed with an accuracy similar to HRA.

7 Numerical experiments

Let us suppose that A is an \((n+1)\times (n+1)\) nonsingular TP matrix whose bidiagonal decomposition (5) is represented by means of the matrix BD(A) (see (8)). If BD(A) can be computed with HRA, then the Matlab functions TNEigenValues, TNSingularValues, TNInverseExpand and TNSolve of the library TNTool in [12] take BD(A) as input argument and compute with HRA the eigenvalues of A, the singular values of A, the inverse matrix \(A^{-1}\) and the solution of linear systems \(Ax= b\), for vectors b whose entries have alternating signs. The computational cost of the function TNSolve is \(O(n^2)\) elementary operations. On the other hand, as can be checked on page 303 of [19], the function TNInverseExpand also has a computational cost of \(O(n^2)\), and thus improves on computing the inverse matrix by solving \(n+1\) linear systems with TNSolve, taking the columns of the identity matrix as right-hand sides (\(O(n^3)\)). The computational cost of the other mentioned functions is \(O(n^3)\).
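For readers without access to the MATLAB library, the following rough Python analogue of TNSolve (our sketch, not the TNTool code) shows where the \(O(n^2)\) cost comes from: the system is solved by applying the inverses of the bidiagonal factors of (5) one after another, read off BD(A). The Pascal-matrix test data are a hypothetical example, not one of the matrices studied in this paper.

```python
import numpy as np

def tn_solve(B, b):
    """Solve A x = b given BD(A) as in (8), by applying
    x = G_n^{-1}...G_1^{-1} D^{-1} F_1^{-1}...F_n^{-1} b; O(n^2) operations."""
    B = np.asarray(B, dtype=float)
    x = np.array(b, dtype=float)
    N = len(x)
    for i in range(N - 1, 0, -1):          # F_n^{-1} acts first, then ..., F_1^{-1}
        for r in range(i, N):              # forward substitution with F_i
            x[r] -= B[r, r - i] * x[r - 1]
    x /= np.diag(B)                        # D^{-1}
    for i in range(1, N):                  # then G_1^{-1}, ..., G_n^{-1}
        for r in range(N - 1, i - 1, -1):  # back substitution with G_i
            x[r - 1] -= B[r - i, r] * x[r]
    return x

# Test: the 4x4 symmetric Pascal matrix, whose BD has all entries equal to 1.
B = np.ones((4, 4))
A = np.array([[1, 1, 1, 1], [1, 2, 3, 4], [1, 3, 6, 10], [1, 4, 10, 20]], float)
b = np.array([1.0, -1.0, 1.0, -1.0])       # alternating-sign right-hand side
print(np.allclose(tn_solve(B, b), np.linalg.solve(A, b)))   # True
```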

For the geometric basis \((g^n_0,\ldots , g_n^n)\), \(n\in {\mathbb {N}}\), using Theorem 2, we have implemented a Matlab function that computes BD(G) for its Gram matrix G, described in (16). Furthermore, using Theorem 3 and Corollary 2, we have implemented a Matlab function for the computation of \(BD(W_J)\), where \(W_{J} := W J\) is the scaled Wronskian matrix at \(x\ge 1\) given in (36).

For the Poisson basis \((P_0,\ldots ,P_n)\), \(n\in {\mathbb {N}}\), considering Theorem 4, we have also implemented a Matlab function that computes BD(G) for its Gram matrix G described in (40). Finally, using Theorem 5 and Corollary 4, we have implemented a Matlab function for the computation of \(BD(W_{J})\) for the matrix \(W_{J} := JWJ\), obtained from its Wronskian matrix W at \(x\le 0\) (see (51)).

Taking as argument the computed matrix representation of the corresponding bidiagonal decomposition (5), we have solved several algebraic problems with the Matlab functions of the library TNTool in [12]: TNEigenValues, TNSingularValues, TNInverseExpand and TNSolve. Additionally, their solutions have also been computed by using the Matlab commands eig, svd, inv and \(\setminus \), respectively. To analyze the accuracy of the approximations, all the solutions have also been calculated in Mathematica using 100-digit arithmetic. The values provided by Mathematica have been considered as the exact solutions of the considered algebraic problems when computing the relative errors.

Observe that, in all cases, the computational cost of computing the entries \(m_{i,j}\), \({\tilde{m}}_{i,j}\), \(1\le j<i\le n+1\), is \(O(n^2)\), and that of computing the pivots \(p_{i,i}\), \(1\le i\le n+1\), is O(n).

In the numerical experiments, we have considered Gram and Wronskian matrices corresponding to different \((n+1)\)-dimensional geometric and Poisson bases with dimensions \(n+1=5, 10, 15, 20\). In order to analyze the accuracy of the computations with the Wronskian matrices, we have considered several \(x\ge 1\) for geometric bases (see Corollary 2) and several \(x< 0\) for Poisson bases (see Corollary 4). We illustrate the obtained results with two particular cases: Wronskian matrices of geometric bases at \(x=10\) and Wronskian matrices of Poisson bases at \(x=-40\). The authors will provide the software implementing the above mentioned routines upon request.

The 2-norm condition number of the considered Gram and Wronskian matrices has been obtained by means of the Mathematica command Norm[A,2]\(\cdot \) Norm[Inverse[A],2] and is shown in Tables 1 and 2, respectively. We can clearly observe that the condition numbers increase significantly with the dimension of the matrices. This fact explains why traditional methods do not obtain accurate solutions when solving the aforementioned algebraic problems. In contrast, the numerical results illustrate the high accuracy obtained when using the bidiagonal decompositions deduced in this paper with the Matlab functions available in [12].

Table 1 Condition number of Gram matrices of geometric bases and Poisson bases
Table 2 Condition number of Wronskian matrices of geometric bases at \(x_{0}=10\) and Poisson bases at \(x_{0}=-40\)

Now, let us notice that Gram matrices of geometric and Poisson bases are STP and then, by Theorem 6.2 of [1], all their eigenvalues are positive and distinct. Furthermore, since Gram matrices are symmetric, their eigenvalues and singular values coincide.

The eigenvalues and singular values of the considered Gram and Wronskian matrices have been computed with the Matlab functions TNEigenValues and TNSingularValues, respectively, taking as argument the matrix representation (8) of the corresponding deduced bidiagonal decomposition (5). Additionally, they have also been obtained by using the Matlab commands eig and svd, respectively. The relative error e of each approximation has been computed as \(e:=|a-\tilde{a} |/|a|\), where a denotes the eigenvalue or singular value computed with Mathematica and \(\tilde{a}\) the eigenvalue or singular value computed with the Matlab functions.

In Tables 3, 4, 5 and 6, the relative errors of the approximations to the lowest eigenvalue and the lowest singular value of the considered matrices are shown. We can observe that our method provides very accurate results, in contrast to the inaccurate results provided by the Matlab commands eig and svd.

Table 3 Relative errors when computing the lowest eigenvalue of Gram matrices of geometric bases (left) and Poisson bases (right)
Table 4 Relative errors when computing the lowest eigenvalue of the Wronskian matrices of Poisson bases at \(x_0=-40\)
Table 5 Relative errors when computing the lowest singular value of Gram matrices of geometric bases (left) and Poisson bases (right)
Table 6 Relative errors when computing the lowest singular value of Wronskian matrices of geometric bases at \(x_0=10\) (left) and Poisson bases at \(x_0=-40\) (right)

On the other hand, two approximations to the inverses of the considered Gram and Wronskian matrices have also been calculated with Matlab. To assess the errors, we have compared both Matlab approximations \(\widetilde{A} ^{-1}\) with the inverse matrix \(A ^{-1}\) computed by Mathematica, using the formula \(e=\Vert A ^{-1}-\widetilde{A} ^{-1} \Vert _2/\Vert A ^{-1}\Vert _2\) for the corresponding relative errors. The obtained relative errors are shown in Tables 7 and 8. Observe that the relative errors achieved through the bidiagonal decompositions obtained in this paper are much smaller than those obtained with the Matlab command inv.

Table 7 Relative errors when computing the inverse of Gram matrices of geometric bases (left) and Poisson bases (right)
Table 8 Relative errors when computing the inverse of Wronskian matrices of geometric bases at \(x_0=10\) (left) and Poisson bases at \(x_0=-40\) (right)

Finally, given random nonnegative integer values \(d_i\), \(i=1,\ldots ,n+1\), we have considered linear systems \( G c = d \) with \( d =((-1)^{i+1}d_i)_{1\le i\le n+1}\), and linear systems \({W c }={ d }\) where, in the case of geometric bases, \({ d }=((-1)^{i+1}d_i)_{1\le i\le n+1}\) and, in the case of Poisson bases, \({ d }=(d_i)_{1\le i\le n+1}\). We have computed in Matlab two approximations of the solution vector: one using the proposed bidiagonal decomposition (5) with the function TNSolve, and the other using the Matlab command \(\setminus \). Then, we have computed the relative error of each approximation \(\tilde{c}\) as \(e=\Vert c-{\tilde{c}}\Vert _2/ \Vert c\Vert _2\), considering the vector c provided by Mathematica as the exact solution.

In Tables 9 and 10, the relative errors when solving the aforementioned linear systems for different values of n are shown. Notice that the accuracy of the proposed method does not deteriorate considerably as the dimension of the system increases, in contrast with the results obtained with the Matlab command \(\setminus \).

Table 9 Relative errors when solving \( G c = d \) with Gram matrices of geometric bases (left) and Poisson bases (right)
Table 10 Relative errors when solving \( W_J c =d\) with Wronskian matrices of geometric bases at \(x_0=10\) (left) and Poisson bases at \(x_0=-40\) (right)