Accurate computations with Gram and Wronskian matrices of geometric and Poisson bases

In this paper we deduce a bidiagonal decomposition of Gram and Wronskian matrices of geometric and Poisson bases. It is also proved that the Gram matrices of both bases are strictly totally positive, that is, all their minors are positive. The deduced bidiagonal decompositions are used to achieve algebraic computations with high relative accuracy for Gram and Wronskian matrices of these bases. The provided numerical experiments illustrate the accuracy obtained, using the theoretical results, when computing the inverse matrix, the eigenvalues, the singular values, or the solutions of some linear systems.


Introduction
Many fundamental problems in interpolation and approximation give rise to interesting linear algebra computations related to Gram and Wronskian matrices. Unfortunately, these matrices are usually ill-conditioned and, consequently, the mentioned algebraic computations lose accuracy as their dimension increases.
Gram matrices arise in the computation of the best approximation onto a subspace of a space equipped with an inner product. Hilbert matrices are well-known, notoriously ill-conditioned Gram matrices corresponding to monomial bases. In [11], accurate algebraic computations with Gram matrices were considered. Given a subspace V of an inner product space U with basis (f_0, …, f_n), the best approximation in V, with respect to the norm defined in U, of a given u ∈ U is v = Σ_{i=0}^n c_{i+1} f_i, where c = (c_1, …, c_{n+1})^T is the solution of the linear system Gc = b, G = (G_{i,j})_{1≤i,j≤n+1} is the Gram matrix with entries G_{i,j} = ⟨f_{i−1}, f_{j−1}⟩, and b_i = ⟨u, f_{i−1}⟩, i = 1, …, n+1.

Given a basis (f_0, …, f_n) of a space of functions with n continuous derivatives at x ∈ R, its Wronskian matrix at x is W(f_0, …, f_n)(x) := (f_{j−1}^{(i−1)}(x))_{1≤i,j≤n+1}.

We say that a matrix is totally positive (TP) if all its minors are nonnegative and strictly totally positive (STP) if all its minors are positive. Interesting applications of TP matrices can be found in [1,5,21].
Neville elimination (NE) is an alternative procedure to Gaussian elimination (see [6–8]). Given a nonsingular (n+1) × (n+1) matrix A = (a_{i,j})_{1≤i,j≤n+1}, the NE process computes a sequence of matrices A^{(1)} := A, A^{(2)}, …, A^{(n+1)}, so that the entries of A^{(k+1)} below the main diagonal in the first k columns, 1 ≤ k ≤ n, are zeros and so, A^{(n+1)} is an upper triangular matrix. The matrix A^{(k+1)} = (a^{(k+1)}_{i,j})_{1≤i,j≤n+1} is computed from A^{(k)} = (a^{(k)}_{i,j})_{1≤i,j≤n+1} by means of the following relations: a^{(k+1)}_{i,j} := a^{(k)}_{i,j} − (a^{(k)}_{i,k}/a^{(k)}_{i−1,k}) a^{(k)}_{i−1,j} if k+1 ≤ i ≤ n+1 and a^{(k)}_{i−1,k} ≠ 0, and a^{(k+1)}_{i,j} := a^{(k)}_{i,j} otherwise. The (i, j) pivot of the NE process of the matrix A is p_{i,j} := a^{(j)}_{i,j}, 1 ≤ j ≤ i ≤ n+1, and, in particular, we say that p_{i,i} is the i-th diagonal pivot. Let us observe that, whenever all pivots are nonzero, no row exchanges are needed in the NE procedure. The (i, j) multiplier of the NE process of the matrix A is m_{i,j} := p_{i,j}/p_{i−1,j}, 1 ≤ j < i ≤ n+1. NE is a nice tool to deduce that a given matrix is TP or STP, as shown in the following characterization, derived from Theorem 4.2 and the arguments of p. 116 of [8].
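The elimination just described can be sketched in a few lines of Python (an illustrative helper, not the paper's implementation). The final assertion checks the well-known characterization of NE pivots as ratios of contiguous minors on a small Hilbert matrix, the classical ill-conditioned TP Gram matrix mentioned above.

```python
import numpy as np

def neville(A):
    """Neville elimination without row exchanges.

    Returns the multipliers m, the pivots p (with p[i, j] the (i+1, j+1)
    pivot in the 1-indexed notation of the text) and the final upper
    triangular matrix A^(n+1).
    """
    A = np.array(A, dtype=float)
    n1 = A.shape[0]
    m = np.zeros((n1, n1))
    p = np.zeros((n1, n1))
    for k in range(n1 - 1):                 # eliminate column k
        for i in range(n1 - 1, k, -1):      # bottom-up: each row uses the row above
            p[i, k] = A[i, k]
            m[i, k] = A[i, k] / A[i - 1, k]
            A[i, :] -= m[i, k] * A[i - 1, :]
    p[np.arange(n1), np.arange(n1)] = np.diag(A)  # diagonal pivots
    return m, p, A

# Hilbert matrix: a classic ill-conditioned totally positive Gram matrix
H = np.array([[1.0 / (i + j + 1) for j in range(4)] for i in range(4)])
m, p, U = neville(H)

assert np.allclose(np.tril(U, -1), 0.0)   # upper triangular result
assert np.all(np.diag(U) > 0)             # positive diagonal pivots

# NE pivots are ratios of contiguous minors:
# p_{3,2} = det A[2,3 | 1,2] / det A[2 | 1]   (1-indexed)
ratio = np.linalg.det(H[np.ix_([1, 2], [0, 1])]) / H[1, 0]
assert np.isclose(p[2, 1], ratio)
```

Since all pivots come out positive, this NE run needs no row exchanges, in agreement with the observation above.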

Theorem 1 A nonsingular matrix A = (a_{i,j})_{1≤i,j≤n+1} is TP if and only if it admits a factorization of the form

A = F_n F_{n−1} ⋯ F_1 D G_1 ⋯ G_{n−1} G_n,    (5)

where F_i and G_i are the TP, lower and upper triangular bidiagonal matrices given by (6), and D = diag(p_{1,1}, …, p_{n+1,n+1}) has positive diagonal entries. The diagonal entries p_{i,i} of D are the diagonal pivots of the NE process of A, and the elements m_{i,j}, m̃_{i,j} are nonnegative and coincide with the multipliers (4) of the NE process of A and A^T, respectively. If, in addition, the entries m_{i,j}, m̃_{i,j} satisfy

m_{i,j} = 0 ⟹ m_{h,j} = 0 for all h > i, and m̃_{i,j} = 0 ⟹ m̃_{h,j} = 0 for all h > i,    (7)

then the decomposition (5) is unique.
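Theorem 1 can be exercised numerically. The Python sketch below (an illustration, not part of the paper) computes the multipliers of A and A^T by NE, groups them into bidiagonal factors following the elimination order (the placement of m_{j+1, j+1−i} on the (j+1, j) entry of F_i is an assumption of this sketch, consistent with the derivation of NE), and checks that the product F_n ⋯ F_1 D G_1 ⋯ G_n recovers a 4 × 4 Hilbert matrix.

```python
import numpy as np

def neville_multipliers(A):
    """Multipliers m[i, k] and diagonal pivots of the NE of A (0-indexed)."""
    A = np.array(A, dtype=float)
    n1 = A.shape[0]
    m = np.zeros((n1, n1))
    for k in range(n1 - 1):
        for i in range(n1 - 1, k, -1):
            m[i, k] = A[i, k] / A[i - 1, k]
            A[i, :] -= m[i, k] * A[i - 1, :]
    return m, np.diag(A).copy()

def bidiagonal_product(A):
    """Rebuild A as F_n ... F_1 D G_1 ... G_n from the NE of A and A^T."""
    m, piv = neville_multipliers(A)
    mt, _ = neville_multipliers(np.array(A, dtype=float).T)
    n1 = A.shape[0]
    prod = np.eye(n1)
    for i in range(n1 - 1, 0, -1):          # factors F_n, ..., F_1
        F = np.eye(n1)
        for j in range(i, n1):
            F[j, j - 1] = m[j, j - i]       # multiplier m_{j+1, j+1-i} (1-indexed)
        prod = prod @ F
    prod = prod @ np.diag(piv)              # D = diag(p_{1,1}, ..., p_{n+1,n+1})
    for i in range(1, n1):                  # factors G_1, ..., G_n
        G = np.eye(n1)
        for j in range(i, n1):
            G[j - 1, j] = mt[j, j - i]      # multiplier of A^T
        prod = prod @ G
    return prod

H = np.array([[1.0 / (i + j + 1) for j in range(4)] for i in range(4)])
assert np.allclose(bidiagonal_product(H), H)
```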

Remark 2
Using the results in [6–8], given the bidiagonal factorization (5) of a nonsingular TP matrix A, a bidiagonal decomposition of A^{−1} can be computed whose factors are lower and upper bidiagonal matrices of the form (6), obtained by replacing the off-diagonal entries m_{i,j} and m̃_{i,j} of the factors F_i and G_i by −m_{i,j} and −m̃_{i,j}, respectively.

Remark 3 If a matrix A is nonsingular and TP, then A^T is also a nonsingular TP matrix and, by Theorem 1, A^T admits a factorization of the form (5) in which the roles of the multipliers m_{i,j} and m̃_{i,j} are interchanged, where F_i and G_i are the lower and upper bidiagonal matrices in (6). Consequently, if A is symmetric and (7) is satisfied, taking into account the uniqueness of the factorization, we immediately deduce that G_i = F_i^T, i = 1, …, n, and then we have

A = F_n F_{n−1} ⋯ F_1 D F_1^T ⋯ F_{n−1}^T F_n^T,

where F_i is the lower bidiagonal matrix in (6), whose off-diagonal entries coincide with the multipliers of the NE process of A, and D is the diagonal matrix with the diagonal pivots.
We say that a real value x is computed with high relative accuracy (HRA) whenever the computed value x̂ satisfies |x − x̂| < K u |x|, where u is the unit round-off and K > 0 is a constant independent of the arithmetic precision. Clearly, HRA implies that the relative errors in the computations have the same order as the machine precision. It is well known that a sufficient condition for an algorithm to be computable with HRA is the no inaccurate cancellation (NIC) condition, which is satisfied if the algorithm only evaluates products, quotients, sums of numbers of the same sign, subtractions of numbers of opposite sign, or subtractions of initial data (cf. [4,10]). If the bidiagonal factorization (5) of a nonsingular TP matrix A can be computed with HRA then the computation of its eigenvalues and singular values, the computation of A^{−1}, and even the resolution of Ax = b for vectors b with alternating signs can also be performed with HRA using the algorithms provided in [11].
Finally, let (p_0, …, p_n) be the monomial basis of the space P_n of polynomials of degree less than or equal to n. In [15], accurate computations with Wronskian matrices of monomial bases are achieved.
In the sequel we shall consider Gram and Wronskian matrices of geometric and Poisson bases. We will now deduce their bidiagonal decomposition (5) and see that algebraic computations with HRA can be achieved for all considered cases.

Total positivity and accurate computations with Gram matrices of geometric bases
The geometric distribution has many applications in population and econometric models. Let us recall that the probability of k failures until the first success is f(k) = t(1 − t)^k, k = 0, 1, 2, …, where t ∈ [0, 1] is the probability of success. Then, for a given n ∈ N, we can define an (n+1)-dimensional polynomial basis (g^n_0, …, g^n_n), where

g^n_k(x) := x(1 − x)^k, k = 0, …, n.    (14)

In Sect. 4 of [14], it is proved that the collocation matrix at values 0 < x_1 < ⋯ < x_{n+1} < 1 of (g^n_0, …, g^n_n) is STP. Furthermore, the bidiagonal factorization (5) of the collocation matrix (g^n_{j−1}(x_i))_{1≤i,j≤n+1} is deduced by taking into account that each basis function g^n_k(x) can be obtained by scaling the polynomial (1 − x)^k with the positive function ϕ(x) = x, k = 0, …, n. Using this factorization, HRA algebraic computations with collocation matrices of geometric bases (g^n_0, …, g^n_n) are achieved.

The geometric basis (g^n_0, …, g^n_n) given in (14) belongs to the vector space L^2[0, 1] of square integrable functions, which is a Hilbert space under the inner product ⟨f, g⟩ := ∫_0^1 f(x) g(x) dx. The Gram matrix of (g^n_0, …, g^n_n) is the symmetric matrix

G = (g_{i,j})_{1≤i,j≤n+1}, g_{i,j} := ⟨g^n_{i−1}, g^n_{j−1}⟩, 1 ≤ i, j ≤ n+1.    (16)

The following result proves that G is STP, providing the multipliers and the diagonal pivots of the NE of G.
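For small dimensions, the strict total positivity of G can be spot-checked directly. The sketch below assumes the closed form g_{i,j} = 2/((i+j−1)(i+j)(i+j+1)), obtained by evaluating the Beta integral ∫_0^1 x^2 (1−x)^{i+j−2} dx; this closed form is an assumption of the illustration, but it is consistent with the value p_{1,1} = 1/3 appearing in the proof of Theorem 2 below.

```python
import numpy as np
from itertools import combinations

n1 = 4  # dimension n + 1 with n = 3

# Gram entries of g_k(x) = x (1 - x)^k on [0, 1] (1-indexed i, j):
# g_{i,j} = int_0^1 x^2 (1-x)^{i+j-2} dx = 2 / ((i+j-1)(i+j)(i+j+1))
G = np.array([[2.0 / ((i + j - 1) * (i + j) * (i + j + 1))
               for j in range(1, n1 + 1)] for i in range(1, n1 + 1)])

# the (1,1) entry is the first diagonal pivot p_{1,1} = 1/3 of the NE of G
assert np.isclose(G[0, 0], 1.0 / 3.0)

# strict total positivity: every minor of G is positive
for k in range(1, n1 + 1):
    for rows in combinations(range(n1), k):
        for cols in combinations(range(n1), k):
            assert np.linalg.det(G[np.ix_(rows, cols)]) > 0
```

For this 4 × 4 case all minors are positive in floating point; for larger dimensions the minors become so small that this brute-force check loses its meaning, which is precisely where the bidiagonal factorization of Theorem 2 is needed.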

Theorem 2
The (n+1) × (n+1) Gram matrix G described by (16) is STP. Moreover, G admits a factorization of the form (5),

G = F_{n,n} F_{n−1,n} ⋯ F_{1,n} D_n G_{1,n} ⋯ G_{n−1,n} G_{n,n},

where F_{i,n} and G_{i,n}, i = 1, …, n, are the lower and upper triangular bidiagonal matrices given by (6) and D_n = diag(p_{1,1}, …, p_{n+1,n+1}). The entries m_{i,j}, m̃_{i,j} and p_{i,i} are given by (18) and (19).

Proof Let G^{(1)} := G and G^{(k)} = (g^{(k)}_{i,j})_{1≤i,j≤n+1}, k = 2, …, n+1, be the computed matrices after k−1 steps of the NE of G. Using induction on k, let us first see that (20) holds. For k = 1, G^{(1)} = G and (20) holds. Let us now suppose that (20) holds for some k ∈ {1, …, n−1}. Then it can be easily checked that formula (20) also holds for k+1. Now, by (3) and (20), we can easily deduce that the pivots p_{i,j} of the NE of G satisfy (24); the particular case i = j gives the diagonal pivots. Clearly, for i = 1 in (24) we obtain p_{1,1} = 1/3. Moreover, it can be checked that (19) holds. Now, let us observe that, by formula (24), the pivots of the NE of G are positive and so, this elimination can be performed without row exchanges.
Finally, using (4) and (23), the multipliers m_{i,j} can be obtained. Taking into account that G is a symmetric matrix, we conclude that m̃_{i,j} = m_{i,j}, 1 ≤ j < i ≤ n+1, and (18) holds. Clearly, the pivots and multipliers of the NE of G are positive and so, G is STP (see Remark 1).
As a direct consequence of Theorem 2, we can deduce that the bidiagonal factorization (5) of the Gram matrix of the geometric basis (g^n_0, …, g^n_n) can be computed with HRA. Furthermore, taking into account Remark 2, its inverse matrix can also be computed with HRA. We state these important properties in the following result.

Corollary 1 The bidiagonal decomposition (5) of the Gram matrix G in (16), corresponding to the geometric basis (g^n_0, …, g^n_n) in (14), can be computed with HRA. Consequently, the computation of the eigenvalues and singular values of G, the computation of the matrix G^{−1}, as well as the resolution of linear systems Gc = b, where the entries of b = (b_1, …, b_{n+1})^T have alternating signs, can be performed with HRA.

Section 7 will show the accurate results obtained when solving algebraic problems with Gram matrices of geometric bases, using the bidiagonal factorization (5) provided by Theorem 2 and the functions in the library TNTool of [12].

Bidiagonal factorization of Wronskian matrices of geometric bases
In this section we are going to deduce a bidiagonal decomposition of the form (5) of the Wronskian matrix of geometric bases (14). We shall see that, using this factorization and the algorithms in [12], many algebraic problems related to these matrices can be solved with HRA.
Let us start with some auxiliary results.

Lemma 1 For a given t ∈ R and n ∈ N, let U_{k,n}, k = 1, …, n, be the (n+1) × (n+1) upper triangular bidiagonal matrices with unit diagonal entries defined above. Then U_n := U_{1,n} ⋯ U_{n,n} is an upper triangular matrix satisfying (26).

Proof Since the matrix U_n is the product of upper triangular bidiagonal matrices, we deduce that U_n is upper triangular. Then, using Proposition 1, equalities (26) are immediately obtained.
Theorem 3 Let (g^n_0, …, g^n_n) be the (n+1)-dimensional geometric basis defined in (14). The Wronskian matrix W := W(g_0, …, g_n)(x) at a given x ∈ R, x ≠ 0, admits a factorization of the form (27), where L_1 = (l^{(1,n)}_{i,j})_{1≤i,j≤n+1} is the lower triangular bidiagonal matrix with unit diagonal entries described in the statement, U_{k,n}, k = 1, …, n, are the upper triangular bidiagonal matrices with unit diagonal entries, and D_n is the diagonal matrix given in (31).

Proof Using Lemma 1 with t := 1 − x, we derive that U_n := U_{1,n} ⋯ U_{n−1,n} U_{n,n} is the (n+1) × (n+1) upper triangular matrix described by (32). In order to prove the result, taking into account (27), (31) and (32), we have to check (33). Equalities (33) readily follow for 1 ≤ j ≤ i and i = 1, …, n+1. For j > i, (33) can be proved by induction on i.
Let us observe that, from (8) and Theorem 3, the bidiagonal factorization (27) of W := W(g_0, …, g_n)(x) can be represented by means of the (n+1) × (n+1) matrix given in (35). Analyzing the sign of the entries in (35), from Theorem 1 we deduce that the matrix W(g_0, …, g_n)(x) is not TP for any x ∈ R. However, the following result shows that, using the bidiagonal decomposition (27), the solution of several algebraic problems related to these matrices can be obtained with HRA.
Corollary 2 Let W := W(g_0, …, g_n)(x) be the Wronskian matrix of the geometric basis defined in (14) and J the diagonal matrix J := diag((−1)^{i−1})_{1≤i≤n+1}. Then, for x ≥ 1, the matrix W_J := W J given in (36) is TP and its bidiagonal decomposition (5) can be computed with HRA.

Proof Using (27) and the fact that J^2 is the identity matrix, we can write W J in the form (5), which gives the bidiagonal factorization of W_J.

Section 7 will show that the resolution of algebraic problems with W_J, and consequently with W, can be performed through the proposed bidiagonal factorization with an accuracy independent of the conditioning or the size of the problem.
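The column-scaling W J can be illustrated numerically. The Python sketch below (illustrative only) builds the Wronskian of the geometric basis from the polynomial coefficients of g_k(x) = x(1 − x)^k and spot-checks that all minors of W J are nonnegative at the sample point x = 2 ≥ 1, while W itself has negative entries.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from itertools import combinations

n = 2
x = 2.0  # a sample point with x >= 1

# coefficient vectors (low degree first) of g_k(x) = x (1 - x)^k
basis = [P.polymul([0.0, 1.0], P.polypow([1.0, -1.0], k)) for k in range(n + 1)]

# Wronskian matrix: row i holds the (i-1)-th derivatives of the basis at x
W = np.array([[P.polyval(x, P.polyder(c, m=i)) if i > 0 else P.polyval(x, c)
               for c in basis] for i in range(n + 1)])

J = np.diag([(-1.0) ** i for i in range(n + 1)])
WJ = W @ J   # flip the sign of every other column

assert (W < 0).any()          # W itself is not TP
for k in range(1, n + 2):     # all minors of W J are nonnegative at this point
    for rows in combinations(range(n + 1), k):
        for cols in combinations(range(n + 1), k):
            assert np.linalg.det(WJ[np.ix_(rows, cols)]) >= -1e-12
```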

Total positivity and factorizations of Gram matrices of Poisson bases
The Poisson distribution is useful when modeling the number of times an event occurs in an interval of time or space. An event can occur k = 0, 1, 2, … times in an interval. If the rate parameter, that is, the average number of events in an interval, is denoted by t, then the probability of observing k events in an interval is given by f(k) = e^{−t} t^k / k!, k = 0, 1, 2, …. The Poisson basis functions

P_k(x) := e^{−x} x^k / k!, k = 0, 1, 2, …,

are the limit, as n tends to infinity, of the Bernstein basis functions of degree n over the interval [0, n], and they also play a useful role in CAGD (see [20]). In Sect. 4 of [14], it is proved that the collocation matrix at positive values x_1 < ⋯ < x_{n+1} of a basis of Poisson functions (P_0, …, P_n) is STP on (0, ∞). Furthermore, the bidiagonal factorization of the collocation matrix (P_{j−1}(x_i))_{1≤i,j≤n+1} is deduced by taking into account that each basis function P_k(x) can be obtained by scaling the polynomial x^k/k! with the positive function ϕ(x) = e^{−x}, k = 0, …, n. Using this factorization, accurate algebraic computations with collocation matrices of Poisson bases are achieved.
The Poisson basis functions belong to the vector space L^2(0, ∞) of square integrable functions, which is a Hilbert space under the inner product ⟨f, g⟩ := ∫_0^∞ f(x) g(x) dx. The Gram matrix of the Poisson basis (e^{−x}, xe^{−x}, …, x^n e^{−x}/n!) is the symmetric matrix G = (g_{i,j})_{1≤i,j≤n+1} with

g_{i,j} := ∫_0^∞ e^{−2x} x^{i+j−2}/((i−1)!(j−1)!) dx, 1 ≤ i, j ≤ n+1.    (40)

In the following result, the multipliers and the diagonal pivots of the NE of the Gram matrix G described in (40) are provided. Furthermore, it is proved that G is an STP matrix.

Theorem 4
The (n+1) × (n+1) Gram matrix G described by (40) is STP. Moreover, G admits a factorization of the form (5),

G = F_{n,n} F_{n−1,n} ⋯ F_{1,n} D_n G_{1,n} ⋯ G_{n−1,n} G_{n,n},

where F_{i,n} and G_{i,n}, i = 1, …, n, are the lower and upper triangular bidiagonal matrices given by (6) and D_n = diag(p_{1,1}, …, p_{n+1,n+1}). The entries m_{i,j}, m̃_{i,j} and p_{i,i} are given by (42) and (43).

Proof Let G^{(1)} := G and G^{(k)} := (g^{(k)}_{i,j})_{1≤i,j≤n+1}, k = 2, …, n+1, be the matrices obtained after k−1 steps of the NE process of the Gram matrix G. Let us see, by induction on k, that (44) holds. For k = 1, we have G^{(1)} = G and (44) holds. Let us now suppose that (44) holds for some k ∈ {1, …, n−1}. Then it can be easily checked that formula (44) also holds for k+1, for 1 ≤ i, j ≤ n+1. Now, by (3) and (44), we can easily deduce that the pivots p_{i,j} of the NE of G satisfy (45) and, for the particular case i = j, p_{i,i} = 1/2^{2i−1} for i = 1, …, n. Then p_{1,1} = 1/2 and p_{i+1,i+1}/p_{i,i} = 1/4 and so, (43) holds. By formula (45), p_{i,j} > 0 and so, the NE can be performed without row exchanges. Finally, using (4) and (45), the multipliers m_{i,j} of the NE of G can be obtained. Taking into account that G is a symmetric matrix, we conclude that m̃_{i,j} = m_{i,j}, 1 ≤ j < i ≤ n+1, and (42) holds. Clearly, the pivots and multipliers of the NE of G are positive and so, G is STP (see Remark 1).
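The pivot pattern p_{i,i} = 1/2^{2i−1} of Theorem 4 can be checked numerically. The sketch below assumes the closed form g_{i,j} = C(i+j−2, i−1)/2^{i+j−1} for the integral (40), which follows from ∫_0^∞ e^{−2x} x^m dx = m!/2^{m+1}; this evaluation is an assumption of the illustration, consistent with p_{1,1} = g_{1,1} = 1/2.

```python
import numpy as np
from math import comb

def neville_pivots(A):
    """Diagonal pivots of Neville elimination (no row exchanges needed)."""
    A = np.array(A, dtype=float)
    n1 = A.shape[0]
    for k in range(n1 - 1):
        for i in range(n1 - 1, k, -1):
            A[i, :] -= (A[i, k] / A[i - 1, k]) * A[i - 1, :]
    return np.diag(A)

n1 = 6
# Gram entries of P_k(x) = e^{-x} x^k / k! on (0, inf) (1-indexed i, j):
# g_{i,j} = int_0^inf e^{-2x} x^{i+j-2} / ((i-1)!(j-1)!) dx
#         = C(i+j-2, i-1) / 2^{i+j-1}
G = np.array([[comb(i + j - 2, i - 1) / 2.0 ** (i + j - 1)
               for j in range(1, n1 + 1)] for i in range(1, n1 + 1)])

piv = neville_pivots(G)
# Theorem 4: p_{i,i} = 1 / 2^{2i-1}, so consecutive pivots decrease by a factor 4
expected = np.array([1.0 / 2.0 ** (2 * i - 1) for i in range(1, n1 + 1)])
assert np.allclose(piv, expected)
```

The geometric decay of the pivots by a factor of 4 already hints at the fast growth of the condition number of G with the dimension.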
As a direct consequence of Theorem 4 we can deduce that the bidiagonal factorization (5) of the Gram matrix of the Poisson basis (e −x , xe −x , . . . , x n e −x /n!) can be computed with HRA. Furthermore, taking into account Remark 2, its inverse matrix can also be computed with HRA. We state these important properties in the following corollary.

Bidiagonal factorization of Wronskian matrices of Poisson bases
In this section we are going to consider Wronskian matrices of Poisson bases and deduce their bidiagonal factorization (5). Using this factorization and the algorithms in [12], many algebraic problems related to these matrices can be solved with great accuracy.
Lemma 3 For a given t ∈ R and n ∈ N, let U_{k,n} = (u^{(k,n)}_{i,j})_{1≤i,j≤n+1}, k = 1, …, n, be the (n+1) × (n+1) upper triangular bidiagonal matrices with unit diagonal entries. Now, let us see that W = L_n D_n U_n, that is, that (49) holds. We shall prove (49) by induction on i.
In a similar way it can be checked that the corresponding identity also holds for any j > i. Therefore, (49) holds.

Example 2
Let us illustrate the bidiagonal factorization (48) of W(P_0, …, P_n)(x) with a simple example, considering the case n = 2 and the basis (P_0, P_1, P_2).

Using (8) and Theorem 5, the bidiagonal factorization (48) of W := W(P_0, …, P_n)(x) can be represented by means of the matrix BD(W) = (BD(W)_{i,j})_{1≤i,j≤n+1} given in (50). Analyzing the sign of the entries in (50), we can deduce from Theorem 1 that the Wronskian matrix of the Poisson basis is not TP for any x ∈ R. However, the following result shows that the solution of several algebraic problems related to these matrices can be obtained with HRA by using the bidiagonal decomposition (48).

Corollary 4 Let W := W(P_0, …, P_n)(x) be the Wronskian matrix of the Poisson basis and J the diagonal matrix J := diag((−1)^{i−1})_{1≤i≤n+1}. Then, for x ≤ 0, the bidiagonal decomposition (52) of the matrix W_J := J W J given in (51) can be computed with HRA and, consequently, the computation of the eigenvalues and singular values of W, the computation of W^{−1}, as well as the resolution of linear systems W c = b, where the entries of b = (b_1, …, b_{n+1})^T have the same sign, can be performed with HRA.
Proof Using Theorem 5 and the fact that J^2 is the identity matrix, by (48) we can write W_J = J W J in the form (52), which gives its bidiagonal factorization (5). Now, it can be easily checked that the multipliers and diagonal pivots of the bidiagonal factorization (52) of W_J are positive if x < 0 and nonnegative for x = 0. Therefore, by Theorem 1, W_J is TP for x = 0 and, by Remark 1, W_J is STP for any x < 0, and its bidiagonal decomposition (52) can be computed with HRA for x ≤ 0. This fact guarantees the computation with HRA of the eigenvalues and singular values of W_J, the inverse matrix W_J^{−1} and the solution of the linear systems W_J c = d, where d = (d_1, …, d_{n+1})^T has alternating signs (see Section 3 of [11]).
Let us observe that, since J is a unitary matrix, the eigenvalues and singular values of W coincide with those of W J and therefore, using the bidiagonal decomposition (52) of W J , their computation for x < 0 can be performed with HRA.
For the accurate computation of W^{−1}, we can take into account that W^{−1} = J W_J^{−1} J. Since, for x < 0, the entries of J b alternate in sign whenever the entries of b = (b_1, …, b_{n+1})^T have the same sign, we can compute with HRA the solution d ∈ R^{n+1} of W_J d = J b and, consequently, the solution c ∈ R^{n+1} of the initial system, since c = J d.
Observe that Corollary 4 requires the evaluation with HRA of e^{−x}. Even when this condition does not hold, Sect. 7 will show that the resolution of algebraic problems with W_J can be performed through the proposed bidiagonal factorization with an accuracy independent of the conditioning or the size of the problem and so, comparable to HRA. Consequently, as in the proof of Corollary 4, we can deduce that the computation of the eigenvalues and singular values of W, the matrix W^{−1}, as well as the solution of linear systems W x = b, where the entries of b = (b_1, …, b_{n+1})^T have the same sign, can be performed with an accuracy similar to HRA.
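The total positivity of the scaled Wronskian used in this section can be spot-checked for small n. The sketch below (illustrative, not the paper's code) differentiates P_k = e^{−x} x^k/k! through the rule (e^{−x} q)' = e^{−x}(q' − q) applied to the polynomial part q, and verifies that every minor of J W J is positive at the sample point x = −1 < 0.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from math import factorial, exp
from itertools import combinations

n = 2
x = -1.0  # a sample point with x < 0

# P_k = e^{-x} q_k with q_k(x) = x^k / k!; since (e^{-x} q)' = e^{-x}(q' - q),
# the m-th derivative is e^{-x} times m iterations of q <- q' - q.
W = np.zeros((n + 1, n + 1))
for k in range(n + 1):
    q = np.zeros(k + 1)
    q[k] = 1.0 / factorial(k)            # coefficients of x^k / k!
    for m in range(n + 1):
        W[m, k] = exp(-x) * P.polyval(x, q)
        q = P.polysub(P.polyder(q), q)   # next derivative of the polynomial part

J = np.diag([(-1.0) ** i for i in range(n + 1)])
WJ = J @ W @ J   # the scaled matrix W_J := J W J of (51)

# every minor of W_J is positive at this x, i.e. W_J is STP here
for sz in range(1, n + 2):
    for rows in combinations(range(n + 1), sz):
        for cols in combinations(range(n + 1), sz):
            assert np.linalg.det(WJ[np.ix_(rows, cols)]) > 0
```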

Numerical experiments
Let us suppose that A is an (n+1) × (n+1) nonsingular TP matrix whose bidiagonal decomposition (5) is represented by means of the matrix BD(A) (see (8)). If BD(A) can be computed with HRA, then the Matlab functions TNEigenValues, TNSingularValues, TNInverseExpand and TNSolve of the library TNTool in [12] take BD(A) as input argument and compute with HRA the eigenvalues of A, the singular values of A, the inverse matrix A^{−1} and the solution of linear systems Ax = b, for vectors b whose entries have alternating signs. The computational cost of the function TNSolve is O(n^2) elementary operations. On the other hand, as can be checked on page 303 of reference [19], the function TNInverseExpand also has a computational cost of O(n^2), and thus improves on the O(n^3) cost of computing the inverse matrix by solving linear systems with TNSolve, taking the columns of the identity matrix as data. The computational cost of the other mentioned functions is O(n^3).
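The O(n^2) cost of solving a linear system from a bidiagonal factorization can be illustrated with a short sketch (a stand-in for TNSolve, not its actual code; the grouping of the NE multipliers into factors is the same assumption used earlier for Theorem 1): each bidiagonal factor of A = F_n ⋯ F_1 D G_1 ⋯ G_n is inverted by a single O(n) substitution sweep, and there are O(n) factors.

```python
import numpy as np

def neville_bd(A):
    """Multipliers of A and A^T, and diagonal pivots, from Neville elimination."""
    def mults(M):
        M = np.array(M, dtype=float)
        n1 = M.shape[0]
        m = np.zeros((n1, n1))
        for k in range(n1 - 1):
            for i in range(n1 - 1, k, -1):
                m[i, k] = M[i, k] / M[i - 1, k]
                M[i, :] -= m[i, k] * M[i - 1, :]
        return m, np.diag(M).copy()
    m, piv = mults(A)
    mt, _ = mults(np.array(A, dtype=float).T)
    return m, mt, piv

def bd_solve(m, mt, piv, b):
    """Solve A c = b through the factors of A = F_n ... F_1 D G_1 ... G_n in O(n^2)."""
    n1 = len(piv)
    c = np.array(b, dtype=float)
    for i in range(n1 - 1, 0, -1):          # apply F_n^{-1}, ..., F_1^{-1}
        for j in range(i, n1):              # forward substitution sweep
            c[j] -= m[j, j - i] * c[j - 1]
    c = c / piv                             # apply D^{-1}
    for i in range(1, n1):                  # apply G_1^{-1}, ..., G_n^{-1}
        for j in range(n1 - 1, i - 1, -1):  # back substitution sweep
            c[j - 1] -= mt[j, j - i] * c[j]
    return c

H = np.array([[1.0 / (i + j + 1) for j in range(4)] for i in range(4)])
b = np.array([1.0, -1.0, 1.0, -1.0])        # entries with alternating signs
m, mt, piv = neville_bd(H)
assert np.allclose(bd_solve(m, mt, piv, b), np.linalg.solve(H, b))
```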
For the geometric basis (g^n_0, …, g^n_n), n ∈ N, using Theorem 2, we have implemented a Matlab function that computes BD(G) for its Gram matrix G, described in (16). Furthermore, using Theorem 3 and Corollary 2, we have implemented a Matlab function for the computation of BD(W_J), where W_J := W J is the scaled Wronskian matrix at x ≥ 1 given in (36).
For the Poisson basis (P_0, …, P_n), n ∈ N, considering Theorem 4, we have also implemented a Matlab function, which computes BD(G) for its Gram matrix G described in (40). Finally, using Theorem 5 and Corollary 4, we have implemented a Matlab function for the computation of BD(W_J) for the matrix W_J := J W J, obtained from its Wronskian matrix W at x ≤ 0 (see (51)).
Taking as argument the computed matrix representation of the corresponding bidiagonal decomposition (5), we have solved several algebraic problems with the Matlab functions of the library TNTool in [12]: TNEigenValues, TNSingularValues, TNInverseExpand and TNSolve. Additionally, their solutions have also been computed by using the Matlab commands eig, svd, inv and \, respectively. To analyze the accuracy of the approximations, all the solutions have also been calculated in Mathematica using 100-digit arithmetic. The values provided by Mathematica have been considered as the exact solutions of the considered algebraic problems in order to compute the relative errors.
Observe that, in all cases, the computational complexity of the computation of the entries m_{i,j}, m̃_{i,j}, 1 ≤ j < i ≤ n+1, and of the diagonal pivots is at most O(n^2).

In the numerical experimentation, we have considered Gram and Wronskian matrices corresponding to different (n+1)-dimensional geometric and Poisson bases with dimensions n+1 = 5, 10, 15, 20. In order to analyze the accuracy in the computations with the Wronskian matrices, we have considered several x ≥ 1 for geometric bases (see Corollary 2) and several x < 0 for Poisson bases (see Corollary 4). We shall illustrate the obtained results with two particular cases: Wronskian matrices of geometric bases at x = 10 and Wronskian matrices of Poisson bases at x = −40. The authors will provide upon request the software with the implementation of the above mentioned routines.
The 2-norm condition number of the considered Gram and Wronskian matrices has been obtained by means of the Mathematica command Norm[A,2]·Norm[Inverse[A],2] and is shown in Tables 1 and 2, respectively. We can clearly observe that the condition numbers increase significantly with the dimension of the matrices. This fact explains why traditional methods do not obtain accurate solutions when solving the aforementioned algebraic problems. In contrast, the numerical results will illustrate the high accuracy obtained when using the bidiagonal decompositions deduced in this paper with the Matlab functions available in [12]. Now, let us notice that Gram matrices of geometric and Poisson bases are STP and then, by Theorem 6.2 of [1], all their eigenvalues are positive and distinct. Furthermore, since Gram matrices are symmetric, their eigenvalues and singular values coincide.
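The growth of the condition numbers reported in Tables 1 and 2 can be reproduced approximately with standard floating point tools. The sketch below uses closed forms for the Gram entries derived from the definitions (16) and (40); both closed forms are assumptions of this illustration, obtained by evaluating the corresponding Beta and Gamma integrals.

```python
import numpy as np
from math import comb

def gram_geometric(n1):
    # assumed closed form of (16): int_0^1 x^2 (1-x)^{i+j-2} dx
    return np.array([[2.0 / ((i + j - 1) * (i + j) * (i + j + 1))
                      for j in range(1, n1 + 1)] for i in range(1, n1 + 1)])

def gram_poisson(n1):
    # assumed closed form of (40): int_0^inf e^{-2x} x^{i+j-2}/((i-1)!(j-1)!) dx
    return np.array([[comb(i + j - 2, i - 1) / 2.0 ** (i + j - 1)
                      for j in range(1, n1 + 1)] for i in range(1, n1 + 1)])

for make in (gram_geometric, gram_poisson):
    conds = [np.linalg.cond(make(n1)) for n1 in (5, 10, 15)]
    # conditioning deteriorates rapidly with the dimension
    assert conds[0] < conds[1] < conds[2]
```

Once the 2-norm condition number approaches the reciprocal of the unit round-off, SVD-based estimates computed in double precision are themselves unreliable, which is why the exact values in Tables 1 and 2 were obtained with Mathematica's extended precision.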
The eigenvalues and singular values of the considered Gram and Wronskian matrices have been computed with the Matlab functions TNEigenValues and TNSingularValues, respectively, taking as argument the matrix representation (8) of the corresponding deduced bidiagonal decomposition (5). Additionally, they have also been obtained by using the Matlab commands eig and svd, respectively. The relative error e of each approximation has been computed as e := |a − ã|/|a|, where a denotes the eigenvalue or singular value computed with Mathematica and ã the one computed with the Matlab functions.
In Tables 3, 4, 5 and 6, the relative errors of the approximations to the lowest eigenvalue and the lowest singular value of the considered matrices are shown. We can observe that our method provides very accurate results, in contrast to the inaccurate results provided by the Matlab commands eig and svd.
On the other hand, two approximations to the inverse matrix of the considered Gram and Wronskian matrices have also been calculated with Matlab. To analyze the errors, we have compared both Matlab approximations Ã^{−1} with the inverse matrix A^{−1} computed by Mathematica, taking into account the formula e = ‖Ã^{−1} − A^{−1}‖_2 / ‖A^{−1}‖_2 for the corresponding relative errors. The obtained relative errors are shown in Tables 7 and 8. Observe that the relative errors achieved through the bidiagonal decompositions obtained in this paper are much smaller than those obtained with the Matlab command inv.

In Tables 9 and 10, the relative errors when solving the aforementioned linear systems for different values of n are shown. Notice that the proposed method preserves the accuracy, which does not considerably increase with the dimension of the system, in contrast with the results obtained with the Matlab command \.

Funding Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.