Accurate computations with matrices related to bases {t^i e^{λt}}

The total positivity of collocation, Wronskian and Gram matrices corresponding to bases of the form (e^{λt}, t e^{λt}, …, t^n e^{λt}) is analyzed. A bidiagonal decomposition providing the accurate numerical resolution of algebraic linear problems with these matrices is derived. Numerical experiments confirm the accuracy of the proposed methods.


Introduction
In this paper we shall focus on (n + 1)-dimensional spaces of functions formed by solutions of differential equations. In particular, we are going to consider linearly independent systems of the form

(e^{λt}, t e^{λt}, …, t^n e^{λt}), λ ∈ R,  (1)

which are very important in several applications of matrix theory, such as spectral theory, controllability and Lyapunov stability theory. Many fundamental problems in interpolation and approximation require linear algebra computations related to collocation matrices of the considered bases. On the other hand, Wronskian matrices arise when solving Hermite interpolation problems, in particular Taylor interpolation problems. Despite the nice properties of the bases (1), they are not orthogonal. To overcome this inconvenience, they can be transformed into an orthogonal basis by means of a transformation matrix, which is a Gram matrix. The inversion of this matrix is required when approximating curves, in the least-squares sense, by linear combinations of control points and the basis functions. Clearly, this procedure becomes more efficient if the matrix inverse is explicitly expressed.
Let us recall that the accurate resolution of linear algebraic problems with structured classes of matrices has been receiving increasing attention in recent years [1, 3-6, 14, 18-26, 29]. For totally positive matrices, it usually requires the explicit computation of a bidiagonal decomposition. In fact, if we achieve the computation of this factorization with high relative accuracy (HRA), we can apply the algorithms presented in [15-17] to solve with HRA algebraic problems such as the computation of the matrix inverse, the computation of the eigenvalues and singular values, or the resolution of some systems of linear equations associated with the matrix.
In this paper, collocation, Wronskian and Gram matrices of the bases (1) are considered. Unfortunately, these matrices become ill-conditioned as their dimension increases. This fact can produce substantial errors when performing algebraic computations numerically and, consequently, the resolution with HRA of numerical algebraic problems involving these matrices is an important issue in numerical linear algebra.
The layout of this paper is as follows. Section 2 summarizes some notations and auxiliary results. Section 3 focuses on the collocation matrices of the bases (1). The strict total positivity of these matrices is proved and their bidiagonal decomposition is provided. In Section 4, the total positivity of Wronskian matrices of the bases (1) is discussed and their bidiagonal decomposition is deduced. It is also shown that, in some cases, these Wronskian matrices are not TP. Nevertheless, they are shown to be closely related to totally positive matrices whose bidiagonal decomposition provides accurate results. Section 5 focuses on Gram matrices of the bases (1). It is proved that these matrices are strictly totally positive and, using a Neville elimination procedure (see [10-12]), a bidiagonal factorization of them is deduced. By means of the proposed factorization, the resolution of the above-mentioned algebraic problems can be achieved with HRA and, consequently, the accuracy of the obtained solutions does not considerably decrease with the dimension of the matrix, as happens with traditional methods. Finally, Section 6 presents numerical experiments confirming the accuracy of the proposed methods for the computation of eigenvalues, singular values, inverses or the solution of some linear systems related to collocation, Wronskian and Gram matrices of the considered bases.

Notations and auxiliary results
Let us recall that a matrix is totally positive (TP) (respectively, strictly totally positive (STP)) if all its minors are nonnegative (respectively, positive). Several applications of these matrices can be found in [2, 8, 28].
Neville elimination (NE) is an alternative procedure to Gaussian elimination (see [10-12]). Given a nonsingular (n + 1) × (n + 1) matrix A = (a_{i,j})_{1≤i,j≤n+1}, the NE process calculates a sequence of matrices A =: A^{(1)}, A^{(2)}, …, A^{(n+1)} such that the entries of A^{(k+1)} below the main diagonal in the first k columns, 1 ≤ k ≤ n, are zeros, and so A^{(n+1)} is upper triangular. In each step of the NE procedure, the matrix A^{(k+1)} is obtained from A^{(k)} by subtracting from each row an appropriate multiple of the previous one, in order to produce the zeros in column k. The (i, j) pivot of the NE process of the matrix A is p_{i,j} := a^{(j)}_{i,j} and, in particular, we say that p_{i,i} is the ith diagonal pivot. Let us observe that, whenever all pivots are nonzero, no row exchanges are needed in the NE procedure. The (i, j) multiplier of the NE process of the matrix A is m_{i,j} := a^{(j)}_{i,j}/a^{(j)}_{i−1,j} = p_{i,j}/p_{i−1,j} whenever a^{(j)}_{i−1,j} ≠ 0, and m_{i,j} := 0 otherwise. NE is a nice tool for deducing that a given matrix is STP, as shown in the following characterization, derived from Theorem 4.1 and Corollary 5.5 of [10] and the arguments of p. 116 of [12].

Theorem 1 A given nonsingular matrix A is STP (resp., TP) if and only if the Neville elimination of A and A^T can be performed without row exchanges, all the multipliers of the Neville elimination of A and A^T are positive (resp., nonnegative), and the diagonal pivots of the Neville elimination of A are all positive.
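This characterization is easy to try out numerically. The following Python/NumPy sketch (the helper name and the chosen nodes are ours, for illustration only) performs the Neville elimination of a small Vandermonde matrix: all multipliers and diagonal pivots come out positive, in agreement with Theorem 1; the same computation applied to the transpose completes the STP check.

```python
import numpy as np

def neville(A):
    """Neville elimination of A (no row exchanges assumed): each step subtracts
    from every row a multiple of the *previous* row to create zeros in one more
    column.  Returns the multipliers m[(i, k)] = p_{i,k}/p_{i-1,k} and the
    diagonal pivots; rows and columns are 0-based here."""
    A = np.array(A, dtype=float)
    n1 = A.shape[0]
    m = {}
    for k in range(n1 - 1):                 # column to be annihilated
        for i in range(n1 - 1, k, -1):      # bottom-up: the row above is intact
            m[(i, k)] = A[i, k] / A[i - 1, k]
            A[i] -= m[(i, k)] * A[i - 1]
    return m, np.diag(A).copy()

V = np.vander([1.0, 2.0, 3.0], increasing=True)   # Vandermonde at nodes 1, 2, 3
mults, pivots = neville(V)
assert all(v > 0 for v in mults.values()) and all(p > 0 for p in pivots)
```

The diagonal pivots are the ratios of consecutive leading principal minors, so for these nodes they equal 1, 1 and 2 (the last being the Vandermonde determinant (2−1)(3−1)(3−2)).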
By Theorem 4.2 and the arguments of p. 116 of [12], a nonsingular TP matrix A = (a_{i,j})_{1≤i,j≤n+1} admits a factorization of the form

A = F_n F_{n−1} ⋯ F_1 D G_1 ⋯ G_{n−1} G_n,  (6)

where F_i and G_i, i = 1, …, n, are the TP, lower and upper triangular bidiagonal matrices given by

(F_i)_{j,j} = 1, (F_i)_{j,j−1} = m_{j,j−i}, j = i + 1, …, n + 1; (G_i)_{j,j} = 1, (G_i)_{j−1,j} = m̃_{j,j−i}, j = i + 1, …, n + 1,  (7)

with all remaining entries zero, and D = diag(p_{1,1}, p_{2,2}, …, p_{n+1,n+1}) has positive diagonal entries. The diagonal entries p_{i,i} of D are the positive diagonal pivots of the Neville elimination of A, and the elements m_{i,j} and m̃_{i,j} are the multipliers of the Neville elimination of A and A^T, respectively. If, in addition, the entries m_{i,j}, m̃_{i,j} satisfy

m_{i,j} = 0 ⇒ m_{h,j} = 0 for all h > i, and m̃_{i,j} = 0 ⇒ m̃_{h,j} = 0 for all h > i,  (8)

then the decomposition (6) is unique.
In [15], the bidiagonal factorization (6) of an (n + 1) × (n + 1) nonsingular TP matrix A is represented by defining a matrix BD(A) = (BD(A)_{i,j})_{1≤i,j≤n+1} such that

BD(A)_{i,j} = m_{i,j} if i > j, BD(A)_{i,j} = m̃_{j,i} if i < j, and BD(A)_{i,i} = p_{i,i}.  (9)

Remark 1 Using the results in [10-12], given the bidiagonal factorization (6) of a nonsingular TP matrix A, a bidiagonal decomposition of A^{−1} can be computed directly from the multipliers m_{i,j}, m̃_{i,j} and the diagonal pivots p_{i,i}.

Remark 2 Let us also observe that if a nonsingular TP matrix A is symmetric, then G_i = F_i^T, i = 1, …, n, and consequently the bidiagonal decomposition (6) satisfies

A = F_n ⋯ F_1 D F_1^T ⋯ F_n^T,

where the matrices F_i, i = 1, …, n, are the lower triangular bidiagonal matrices described in (7), whose off-diagonal entries coincide with the multipliers of the Neville elimination of A, and D is the diagonal matrix with the pivots of the Neville elimination of A.
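To make the structure of (6) concrete, the following sketch (helper names are ours; indices are 0-based) assembles the factors F_i, G_i and D from the Neville multipliers of A and A^T and checks that their product reproduces A for a small Vandermonde matrix. Each F_s carries the multipliers m_{j, j−s} on its subdiagonal, as in (7), and each G_s the transposed data.

```python
import numpy as np

def neville(A):
    # multipliers m[(i, k)] and diagonal pivots of the Neville elimination
    A = np.array(A, dtype=float); n1 = A.shape[0]; m = {}
    for k in range(n1 - 1):
        for i in range(n1 - 1, k, -1):
            m[(i, k)] = A[i, k] / A[i - 1, k]
            A[i] -= m[(i, k)] * A[i - 1]
    return m, np.diag(A).copy()

def bd_product(m, mt, piv):
    """Assemble F_n ... F_1 D G_1 ... G_n from the multipliers of A (m),
    of A^T (mt) and the diagonal pivots, and return their product."""
    n1 = len(piv)
    M = np.diag(piv)
    for s in range(1, n1):                  # F_1 applied first: M = F_n...F_1 D
        F = np.eye(n1)
        for r in range(s, n1):
            F[r, r - 1] = m[(r, r - s)]     # subdiagonal entries of F_s
        M = F @ M
    for s in range(1, n1):                  # then M = F_n...F_1 D G_1...G_n
        G = np.eye(n1)
        for r in range(s, n1):
            G[r - 1, r] = mt[(r, r - s)]    # superdiagonal entries of G_s
        M = M @ G
    return M

V = np.vander([1.0, 2.0, 3.0, 4.0], increasing=True)
m, piv = neville(V)
mt, _ = neville(V.T)
assert np.allclose(bd_product(m, mt, piv), V)
```

The reconstruction succeeds because, for an STP matrix, the multipliers of the Neville eliminations of A and A^T are exactly the off-diagonal entries appearing in the unique factorization (6).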
Let us also recall that a real value t is computed with high relative accuracy (HRA) whenever the computed value t̂ satisfies |t − t̂| < K u |t|, where u is the unit round-off and K > 0 is a constant independent of the arithmetic precision. Clearly, HRA implies great accuracy, since the relative errors in the computations have the same order as the machine precision. A sufficient condition to assure that an algorithm can be computed with HRA is the no inaccurate cancellation condition, sometimes denoted as the NIC condition, which is satisfied if the algorithm only evaluates products, quotients, sums of numbers of the same sign, subtractions of numbers of opposite sign, or subtractions of initial data (cf. [7, 15]). If the bidiagonal factorization (6) of a nonsingular TP matrix A can be computed with HRA, then the computation of its eigenvalues and singular values, the computation of A^{−1}, and even the resolution of systems of linear equations Ax = b, for vectors b with alternating signs, can also be performed with HRA using the algorithms provided in [16].
Let (u_0, …, u_n) be a basis of a space U of functions defined on I ⊆ R. Given a sequence of parameters t_1 < ⋯ < t_{n+1} on I, the corresponding collocation matrix is defined by

M := (u_{j−1}(t_i))_{1≤i,j≤n+1}.

Many fundamental problems in interpolation and approximation require linear algebra computations related to collocation matrices. In fact, they appear when imposing interpolation conditions for a given basis.

If the space U is formed by n-times continuously differentiable functions and t ∈ I, the Wronskian matrix at t is defined by

W(u_0, …, u_n)(t) := (u_{j−1}^{(i−1)}(t))_{1≤i,j≤n+1},

where u^{(i)}(t), i ≤ n, denotes the ith derivative of u at t. As usual, we shall use u′(t) to denote the first derivative of u at t.

Now, let us suppose that U is a Hilbert space of functions on the interval [0, T], T ≤ +∞, under a given inner product ⟨·, ·⟩ defined for any u, v ∈ U. Then, given linearly independent functions v_0, …, v_n in U, the corresponding Gram matrix is the symmetric matrix described by

G(v_0, …, v_n) := (⟨v_{i−1}, v_{j−1}⟩)_{1≤i,j≤n+1}.

In the following sections we shall focus on (n + 1)-dimensional spaces of functions formed by solutions of differential equations. In particular, we are going to consider bases of the form

(e^{λt}, t e^{λt}, …, t^n e^{λt}),  (13)

for a given λ ∈ R.
It is well known that the basis (13) forms a fundamental set of solutions of the differential equation ψ(D)y(t) = 0, where ψ(t) = (t − λ)^{n+1} and D := d/dt. In [13], the applicability of its Wronskian and Gram matrices in spectral theory, controllability and Lyapunov stability theory is illustrated.
In the following sections we shall analyze the total positivity and obtain the bidiagonal factorization (6) of collocation, Wronskian and Gram matrices of the bases (13). In all the considered cases, we achieve algebraic computations with an accuracy similar to that provided by HRA.

Total positivity and factorizations of collocation matrices of bases {t^i e^{λt}}
First, let us observe that the basis (13) for λ = 0 is the monomial basis (p_0, …, p_n) of the space U = P^n of polynomials of degree less than or equal to n, with p_i(t) := t^i, i = 0, …, n. It is well known that (p_0, …, p_n) is STP on (0, +∞) and so, given 0 < t_1 < ⋯ < t_{n+1}, the corresponding collocation matrix

V := (t_i^{j−1})_{1≤i,j≤n+1}

is STP (see Section 3 of [15]). V is a Vandermonde matrix and has relevant applications in interpolation and numerical quadrature (see [9, 27]). The explicit expression (15) of BD(V) can be found in [15] or in Theorem 3 of [18].

Now, let us suppose that (u_0, …, u_n) is a system of functions defined on I = [a, b] and that a < t_1 < ⋯ < t_{n+1} < b is a sequence of nodes such that the corresponding collocation matrix A in (16) is STP. Let

A = F_n ⋯ F_1 D G_1 ⋯ G_n  (17)

be the bidiagonal factorization (6) of A, where F_i and G_i, i = 1, …, n, are the lower and upper triangular bidiagonal matrices of the form (7) and D is a diagonal matrix with positive diagonal entries. Given a positive real function φ: [a, b] → R and positive real values d_0, …, d_n, Theorem 1 of [19] proves that the collocation matrix at a < t_1 < ⋯ < t_{n+1} < b of the system (ũ_0, …, ũ_n) defined by ũ_i := d_i φ u_i, i = 0, …, n, is also STP. Moreover, its bidiagonal factorization (6) can be obtained from (17): this collocation matrix admits a factorization (19) in terms of lower and upper bidiagonal matrices F̂_i and Ĝ_i, i = 1, …, n, of the form (20) and a diagonal matrix D̂ = diag(q_{1,1}, …, q_{n+1,n+1}). The entries r_{i,j}, r̃_{i,j} and q_{i,i} appearing in (19) and (20) are given in terms of the entries m_{i,j}, m̃_{i,j} and p_{i,i} of the bidiagonal factorization (17) of the collocation matrix A in (16).

Now, as a direct consequence of Theorem 1 of [19], and taking into account the bidiagonal decomposition of Vandermonde matrices described in (15), we can state the following result, providing the bidiagonal factorization (6) of collocation matrices of the bases (13) at positive values.
The strict total positivity of the basis follows from Theorem 1, taking into account the positivity of the pivots and multipliers of the NE procedure.

Theorem 2
The basis (e^{λt}, t e^{λt}, …, t^n e^{λt}), λ ∈ R, is STP on the interval (0, +∞). Moreover, given positive values t_1 < ⋯ < t_{n+1}, the bidiagonal factorization (6) of the corresponding collocation matrix C in (22) takes the form (23), where D = diag(q_{1,1}, …, q_{n+1,n+1}) and F_i, G_i, i = 1, …, n, are the lower and upper triangular bidiagonal matrices of the form (20), with entries given by (24) and (25).

Let us notice that, from Theorem 2, the bidiagonal factorization (6) of the collocation matrix C in (22) can be represented by means of the (n + 1) × (n + 1) matrix BD(C) in (9). Let us also observe that the computation with HRA of the bidiagonal decomposition (23) would require the evaluation with HRA of the exponential function involved in (25). Although this cannot be guaranteed, Section 6 will show that accurate algebraic computations with the collocation matrices associated with these non-polynomial bases can be performed.
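Since t^j e^{λt} = e^{λt} p_j(t), the collocation matrix C is just the Vandermonde matrix V with its ith row scaled by the positive value e^{λ t_i} (the case φ(t) = e^{λt}, d_i = 1 of the construction above). A minimal numerical check of this identity and of the positivity of the Neville data of C (Python/NumPy; the helper and the node values are ours, chosen only for illustration):

```python
import numpy as np

def neville(A):
    # multipliers and diagonal pivots of the Neville elimination (0-based)
    A = np.array(A, dtype=float); n1 = A.shape[0]; m = {}
    for k in range(n1 - 1):
        for i in range(n1 - 1, k, -1):
            m[(i, k)] = A[i, k] / A[i - 1, k]
            A[i] -= m[(i, k)] * A[i - 1]
    return m, np.diag(A).copy()

lam, n = 1.5, 4
t = np.array([(i + 1.0) / (n + 1.0) for i in range(n + 1)])   # positive nodes

# collocation matrix of (e^{lam t}, t e^{lam t}, ..., t^n e^{lam t})
C = np.array([[ti ** j * np.exp(lam * ti) for j in range(n + 1)] for ti in t])
V = np.vander(t, increasing=True)             # Vandermonde at the same nodes
assert np.allclose(C, np.diag(np.exp(lam * t)) @ V)

# positive multipliers and pivots for C and C^T: consistent with Theorem 2
for M in (C, C.T):
    mults, pivots = neville(M)
    assert all(v > 0 for v in mults.values()) and all(p > 0 for p in pivots)
```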

Total positivity and factorizations of Wronskian matrices of bases {t^i e^{λt}}
For the case λ = 0, Corollary 1 of [20] provides the bidiagonal factorization (6) of the Wronskian matrix of the monomial basis, W := W(p_0, …, p_n)(t), which can be computed with HRA in O(n^2) time, together with the explicit expression (26) of BD(W). Using the mentioned result, the following auxiliary result can be easily deduced.
Lemma 1 For a given t ∈ R and n ∈ N, let U_k = (u^{(k)}_{i,j})_{1≤i,j≤n+1}, k = 1, …, n, be the (n + 1) × (n + 1) upper triangular bidiagonal matrices with unit diagonal entries and off-diagonal entries given by (27). Then U := U_1 ⋯ U_n is an upper triangular matrix whose entries are given by (28).

Proof U is an upper triangular matrix since it is the product of upper triangular bidiagonal matrices. The expression (28) of its entries follows from formula (11) of [20].

Now we provide the bidiagonal factorization (6) of the Wronskian matrices of the bases (13). By means of the following result, the total positivity of these matrices will be easily analyzed.

Theorem 3
The Wronskian matrix W of the system (e^{λt}, t e^{λt}, …, t^n e^{λt}), λ ∈ R, at t ∈ R admits a factorization of the form

W = L D U,  (29)

where D is a diagonal matrix, L := L_n ⋯ L_1 with L_i, i = 1, …, n, lower triangular bidiagonal matrices with unit diagonal entries, and U := U_1 ⋯ U_n with U_i, i = 1, …, n, upper triangular bidiagonal matrices with unit diagonal entries.

Proof Using Lemma 1, we deduce that the product L_1^T ⋯ L_n^T is an upper triangular matrix whose entries are given by (28), and therefore L := L_n ⋯ L_1 is a lower triangular matrix with explicitly known entries. On the other hand, using again Lemma 1, we have that U := U_1 ⋯ U_n is an upper triangular matrix with explicitly known entries. In order to prove the result, it remains to check that W = LDU entrywise, that is, that equality (30) holds. We shall prove (30) by induction on i. The case i = 1 follows by direct computation. Now, assuming that (30) holds for some i and taking into account the identity (31), the diagonal case follows from (32); for any j such that i < j ≤ n + 1, a similar reasoning using (31) and the identity (33) applies. Therefore, from (32) and (33), we conclude that (30) holds for i + 1.
Let us observe that, from (9) and Theorem 3, the bidiagonal factorization (6) of the Wronskian matrix of the bases (13) can be represented by means of the (n + 1) × (n + 1) matrix BD(W) given in (34). Analyzing the sign of the entries of (34), we can deduce the following result on the total positivity of the Wronskian matrix of the basis (e^{λt}, t e^{λt}, …, t^n e^{λt}).

Corollary 1 Let W be the Wronskian matrix of (e^{λt}, t e^{λt}, …, t^n e^{λt}) at a given t ∈ R and let J := diag((−1)^{i−1})_{1≤i≤n+1}. Then:
i) for λ ≥ 0, W is STP at t > 0 and TP at t = 0;
ii) for λ < 0, the matrix W_J := J W J is TP at t = 0 and STP at t < 0.
Proof For λ ≥ 0, i) can be directly deduced taking into account that the multipliers and diagonal pivots of the NE of W and W T are positive if t > 0 and nonnegative for t = 0. Then i) follows from Theorem 1.
For λ < 0, using again (29) and taking into account that J^2 is the identity matrix, we can write (35), which gives the bidiagonal factorization (6) of W_J. Now, it can be easily checked that the multipliers and diagonal pivots of the NE of W_J and W_J^T are positive for t < 0 and nonnegative for t = 0, and ii) again follows from Theorem 1.

Now, let us focus on the case λ < 0 and t ≤ 0. Since J is a unitary matrix, the eigenvalues and singular values of W := W(e^{λt}, t e^{λt}, …, t^n e^{λt}) coincide with those of W_J := J W(e^{λt}, t e^{λt}, …, t^n e^{λt}) J and, therefore, their computation can be performed with HRA. Besides, for the accurate computation of W^{−1}, we can take into account that

W^{−1} = J W_J^{−1} J.  (37)

Since W_J^{−1} = (w_{i,j})_{1≤i,j≤n+1} can be computed with HRA for λ < 0 and t ≤ 0, by (37) the entries of W^{−1} are, up to the signs (−1)^{i+j}, those of W_J^{−1}.

Let us observe that the evaluation with HRA of the exponential value e^{λt} for λ ≠ 0 and t ≠ 0 cannot be guaranteed and, consequently, we cannot assure the computation with HRA of the bidiagonal factorization (6) of the TP Wronskian matrix of (e^{λt}, t e^{λt}, …, t^n e^{λt}) for λ ≥ 0 and t ≥ 0, nor the computation with HRA of the bidiagonal factorization (6) of the TP matrix W_J in (35) for λ < 0 and t ≤ 0. However, Section 6 will show that the resolution of algebraic problems with these matrices can be performed through the proposed bidiagonal factorizations with high accuracy.
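A quick numerical illustration of Corollary 1 and of the similarity argument above (the construction is ours: the entries of W are obtained from the Leibniz rule, (t^j e^{λt})^{(k)} = Σ_m C(k, m) (j!/(j−m)!) t^{j−m} λ^{k−m} e^{λt}): for λ < 0 and t < 0, all entries of J W J are positive, as strict total positivity requires, and W and J W J share their spectrum.

```python
import math
import numpy as np

def wronskian(n, lam, t):
    """W[k, j] = d^k/dt^k (t^j e^{lam t}), 0 <= k, j <= n, via the Leibniz rule."""
    W = np.zeros((n + 1, n + 1))
    for k in range(n + 1):
        for j in range(n + 1):
            s = sum(math.comb(k, m) * math.perm(j, m)
                    * lam ** (k - m) * t ** (j - m)
                    for m in range(min(k, j) + 1))
            W[k, j] = s * math.exp(lam * t)
    return W

n, lam, t = 4, -1.0, -1.0
W = wronskian(n, lam, t)
J = np.diag([(-1.0) ** i for i in range(n + 1)])
WJ = J @ W @ J

assert np.all(WJ > 0)                 # consistent with W_J being STP (ii above)
# J is orthogonal, so W and W_J = J W J share eigenvalues (all positive here)
assert np.allclose(np.sort(np.linalg.eigvals(W).real),
                   np.sort(np.linalg.eigvals(WJ).real))
```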

Total positivity and factorizations of Gram matrices of bases {t^i e^{λt}}
As in the previous sections, let us first consider the case λ = 0. It is well known that the polynomial space P^n is a Hilbert space under the inner product

⟨f, g⟩ := ∫_0^1 f(t) g(t) dt.

The Gram matrix corresponding to the monomial basis (p_0, …, p_n) is

H := G(p_0, …, p_n) = (1/(i + j − 1))_{1≤i,j≤n+1},

a Hilbert matrix and so a classical example of a notoriously ill-conditioned STP matrix (cf. [15, 16]). However, the pivots and multipliers of its Neville elimination obey the explicit formulae (41) and (42), which only involve products and quotients of positive numbers, and so BD(H) can be computed with HRA, as the following result shows.

Proof Let H^{(k)} = (h^{(k)}_{i,j})_{1≤i,j≤n+1}, k = 2, …, n + 1, be the matrices obtained after k − 1 steps of the NE procedure for H. By induction on k, we shall see that the entries h^{(k)}_{i,j} admit the explicit expression (42). It can be easily checked that (42) follows for k = 2. If (42) holds for some k ∈ {2, …, n}, then performing one further elimination step on the rows i = k + 1, …, n + 1 and simplifying shows that (42) also holds for k + 1.

Now, by (4) and (42), the pivots of the NE of H satisfy (43). For the particular case i = j, the recurrence (41) for the diagonal pivots readily follows. Let us observe that, since the pivots of the NE of H are nonzero, this elimination can be performed without row exchanges.

Finally, using (5) and (42), the multipliers m_{i,j} can be described explicitly. Since H is symmetric, using Remark 2, we deduce that m̃_{i,j} = m_{i,j}.
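The NIC property is easy to watch in exact arithmetic. The following sketch (the helper name is ours) runs the Neville elimination of the Hilbert matrix over exact rationals: every multiplier and every diagonal pivot comes out positive, and, for instance, p_{2,2} = 1/12, the ratio of the first two leading principal minors.

```python
from fractions import Fraction

def neville_exact(A):
    # Neville elimination over exact rationals: multipliers and diagonal pivots
    A = [row[:] for row in A]
    n1 = len(A)
    mults = {}
    for k in range(n1 - 1):
        for i in range(n1 - 1, k, -1):
            mu = A[i][k] / A[i - 1][k]
            mults[(i, k)] = mu
            A[i] = [A[i][j] - mu * A[i - 1][j] for j in range(n1)]
    return mults, [A[i][i] for i in range(n1)]

n1 = 6
H = [[Fraction(1, i + j + 1) for j in range(n1)] for i in range(n1)]  # Hilbert
mults, pivots = neville_exact(H)

assert all(mu > 0 for mu in mults.values()) and all(p > 0 for p in pivots)
assert pivots[0] == 1 and pivots[1] == Fraction(1, 12)
```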
For λ < 0, the basis functions t^i e^{λt}, i = 0, …, n, belong to the vector space L^2[0, +∞) of square integrable functions, which is a Hilbert space under the inner product

⟨f, g⟩ := ∫_0^{+∞} f(t) g(t) dt.

The Gram matrix of (e^{λt}, t e^{λt}, …, t^n e^{λt}) is the symmetric matrix G = (g_{i,j})_{1≤i,j≤n+1} with

g_{i,j} = ∫_0^{+∞} t^{i+j−2} e^{2λt} dt.  (47)

The following result provides the multipliers and the diagonal pivots of the Neville elimination of the Gram matrix G described in (47) and proves that, for λ < 0, this matrix is STP.
Theorem 5 Let G be the Gram matrix of the basis (e^{λt}, t e^{λt}, …, t^n e^{λt}), defined by (47) for λ < 0. Then the multipliers m_{i,j} and diagonal pivots p_{i,i} of the Neville elimination of G are given by the explicit formulae (48) and (49). Moreover, G is STP and its bidiagonal factorization (6) can be computed with HRA.

Proof Let G^{(k)} = (g^{(k)}_{i,j})_{1≤i,j≤n+1}, k = 2, …, n + 1, be the matrices obtained after k − 1 steps of the Neville elimination of G. First, let us see by induction on k that the entries g^{(k)}_{i,j} admit the explicit expression (50). For k = 2, taking into account that g^{(2)}_{i,j} = g_{i,j} − (g_{i,1}/g_{i−1,1}) g_{i−1,j}, the equality (50) can be easily checked. Let us now suppose that (50) holds for some k ∈ {2, …, n}; then, performing one further elimination step, formula (50) also holds for k + 1. Now, by (4) and (50), we can easily deduce that the pivots p_{i,j} of the Neville elimination of G satisfy (51) and, for the particular case i = j, formula (49) clearly holds. Let us observe that, by formula (51), the pivots of the Neville elimination of G are nonzero and so this elimination can be performed without row exchanges.
Finally, using (5) and (51), the multipliers m_{i,j} can be written as in (48). Taking into account that G is a symmetric matrix, by Remark 2 we conclude that m̃_{i,j} = m_{i,j}, 1 ≤ j < i ≤ n + 1. Since λ < 0, it can be checked that the pivots and the multipliers of the NE are all positive and then, by Theorem 1, we conclude that G is STP. Since the positive multipliers and diagonal pivots only require products and quotients, the bidiagonal factorization (6) of G can be computed with HRA, and the algebraic computations with G mentioned in the statement of this theorem can be performed with HRA with the algorithms provided in [16].

Now, from Theorem 5, BD(G) = (BD(G)_{i,j})_{1≤i,j≤n+1} is given by (53). Section 6 will show the accurate results obtained when solving algebraic problems with Gram matrices of the bases (e^{λt}, t e^{λt}, …, t^n e^{λt}) for λ < 0, using the bidiagonal factorization (6) provided by Theorem 5 and the algorithms presented in [16, 17].
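Evaluating the integral in (47) as a standard Gamma-function integral gives the closed form g_{i,j} = (i + j − 2)!/(−2λ)^{i+j−1}, which for rational λ allows an exact check of Theorem 5. In the following sketch (the helper and the choice λ = −1 are ours), the Neville elimination of G over exact rationals yields positive multipliers and pivots, and the diagonal pivots match the pattern p_{i,i} = ((i − 1)!)^2/(−2λ)^{2i−1}:

```python
from fractions import Fraction
from math import factorial

def neville_exact(A):
    # Neville elimination over exact rationals: multipliers and diagonal pivots
    A = [row[:] for row in A]; n1 = len(A); mults = {}
    for k in range(n1 - 1):
        for i in range(n1 - 1, k, -1):
            mu = A[i][k] / A[i - 1][k]
            mults[(i, k)] = mu
            A[i] = [A[i][j] - mu * A[i - 1][j] for j in range(n1)]
    return mults, [A[i][i] for i in range(n1)]

# lam = -1, so -2*lam = 2 and g_{i,j} = (i+j-2)!/2^{i+j-1} (0-based below)
n1 = 6
G = [[Fraction(factorial(i + j), 2 ** (i + j + 1)) for j in range(n1)]
     for i in range(n1)]

mults, pivots = neville_exact(G)
# positive multipliers and pivots: G is STP by Theorem 1, as Theorem 5 states
assert all(mu > 0 for mu in mults.values()) and all(p > 0 for p in pivots)
assert all(p == Fraction(factorial(i) ** 2, 2 ** (2 * i + 1))
           for i, p in enumerate(pivots))
```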

Numerical experiments
Let us suppose that A is an (n + 1) × (n + 1) nonsingular TP matrix whose bidiagonal decomposition (6) is represented by means of the matrix BD(A) given in (9). If BD(A) can be computed with HRA, then the Matlab functions TNEigenValues, TNSingularValues, TNInverseExpand and TNSolve of the library TNTool in [17] take BD(A) as input argument and compute with HRA, respectively, the eigenvalues of A, the singular values of A, its inverse matrix A^{−1} (using the algorithm presented in [25]) and the solution of systems of linear equations Ax = b, for vectors b whose entries have alternating signs.
The computational cost of the function TNSolve is O(n^2) elementary operations. On the other hand, as can be checked on page 303 of reference [25], TNInverseExpand also has a computational cost of O(n^2), improving on the O(n^3) cost of computing the inverse matrix by solving linear systems with TNSolve, taking the columns of the identity matrix as data. The computational cost of the other mentioned functions is O(n^3).
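The O(n^2) cost of TNSolve can be seen from the factorization itself: solving Ax = b amounts to n bidiagonal forward substitutions, one diagonal scaling and n bidiagonal back substitutions. The following Python sketch of this idea is our own illustration, not the TNTool code, and the Neville helper recomputes the multipliers in plain floating point rather than with HRA:

```python
import numpy as np

def neville(A):
    # multipliers and diagonal pivots of the Neville elimination (0-based)
    A = np.array(A, dtype=float); n1 = A.shape[0]; m = {}
    for k in range(n1 - 1):
        for i in range(n1 - 1, k, -1):
            m[(i, k)] = A[i, k] / A[i - 1, k]
            A[i] -= m[(i, k)] * A[i - 1]
    return m, np.diag(A).copy()

def bd_solve(m, mt, piv, b):
    """Solve A x = b for A = F_n ... F_1 D G_1 ... G_n in O(n^2) operations."""
    n1 = len(b)
    x = np.array(b, dtype=float)
    for s in range(n1 - 1, 0, -1):            # F_n, ..., F_1: forward subst.
        for r in range(s, n1):
            x[r] -= m[(r, r - s)] * x[r - 1]
    x /= piv                                  # D
    for s in range(1, n1):                    # G_1, ..., G_n: back subst.
        for r in range(n1 - 1, s - 1, -1):
            x[r - 1] -= mt[(r, r - s)] * x[r]
    return x

V = np.vander([1.0, 2.0, 3.0, 4.0], increasing=True)
b = np.array([1.0, -1.0, 1.0, -1.0])          # alternating-sign data vector
m, piv = neville(V)
mt, _ = neville(V.T)
assert np.allclose(bd_solve(m, mt, piv, b), np.linalg.solve(V, b))
```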
We have implemented different Matlab functions for computing the bidiagonal decompositions (6) proposed in this paper for the matrices related to the basis (e λt , te λt , . . . , t n e λt ) in (13).
Firstly, using Theorem 2, we have implemented a Matlab function that computes BD(C), the bidiagonal decomposition (6) of the collocation matrix C of the basis (13) at positive values t 1 < · · · < t n+1 , for a given λ ∈ R.
Moreover, taking into account Theorem 3 and Corollary 1, we have also implemented two Matlab functions. One of them computes BD(W), for the Wronskian matrix W of the basis (13) at t > 0 with λ ≥ 0, and the other computes BD(W_J), for the matrix W_J := J W J obtained from the Wronskian matrix W at t < 0 with λ < 0 (see (35)).
Finally, considering Theorem 5, we have implemented a Matlab function that computes BD(G), for the Gram matrix G of the basis (13) described in (47) for a given λ < 0.
Observe that, in all cases, the computational complexity of the computation of the entries m_{i,j}, m̃_{i,j}, 1 ≤ j < i ≤ n + 1, is O(n^2), and that of the computation of p_{i,i}, 1 ≤ i ≤ n + 1, is O(n).

Table 1 From left to right: condition number of collocation matrices at t_i = i/(n + 1), i = 1, …, n + 1; Wronskian matrices at t = 2 and λ = 3; Wronskian matrices at t = −5 and λ = −4; and, finally, Gram matrices at λ = −1.

In the numerical experimentation, we have considered different (n + 1) × (n + 1) collocation, Wronskian and Gram matrices corresponding to the bases (e^{λt}, t e^{λt}, …, t^n e^{λt}). Let us observe that the bidiagonal decomposition (6) of the considered collocation and Wronskian matrices is not computed with HRA and so HRA is not preserved in the computation of the algebraic problems mentioned before. In contrast, HRA is preserved in the computations with the considered Gram matrices. The numerical results illustrate the accuracy of the computations even when the bidiagonal factorization (6) is not computed with HRA. The authors will provide upon request the software with the implementation of the above-mentioned routines.
The 2-norm condition number of the considered matrices has been obtained by means of the Mathematica command Norm[A,2]·Norm[Inverse[A],2] and is shown in Table 1. We can clearly observe that the condition numbers significantly increase with the dimension of the matrices. This explains why traditional methods do not obtain accurate solutions when solving the aforementioned algebraic problems. In contrast, the numerical results will illustrate the high accuracy obtained when using the bidiagonal decompositions deduced in this paper with the Matlab functions available in [17].
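The growth reported in Table 1 is easy to reproduce for the λ = 0 Gram case, where the matrices are Hilbert matrices. A small check (the sizes are ours and the values are machine estimates, not the exact condition numbers of the table):

```python
import numpy as np

def hilbert(n1):
    # Hilbert matrix: the Gram matrix of the monomial basis (lambda = 0)
    return np.array([[1.0 / (i + j + 1) for j in range(n1)] for i in range(n1)])

conds = [np.linalg.cond(hilbert(n1)) for n1 in (4, 8, 12)]
assert conds[0] < conds[1] < conds[2]   # conditioning worsens with the size
assert conds[1] > 1e9 and conds[2] > 1e12
```

Already at dimension 12 the condition number exceeds the reciprocal of the double-precision unit round-off, so conventional algorithms cannot be expected to deliver any correct digits for the smallest eigenvalues or singular values.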

Eigenvalues and singular values
In our first numerical example we have computed the eigenvalues and singular values of the considered collocation, Wronskian and Gram matrices. Let us notice that Gram matrices of bases (e λt , te λt , . . . , t n e λt ) are STP for λ < 0 and then, by Theorem 6.2 of [2], all their eigenvalues are positive and distinct. Furthermore, since Gram matrices are symmetric, their eigenvalues and singular values coincide.
We have computed the eigenvalues and singular values of the considered matrices with the following algorithms:
- Matlab's functions TNEigenValues and TNSingularValues, with the corresponding matrix representation (9) of the bidiagonal decomposition (6) as input argument.
- Matlab's commands eig and svd.
In Tables 2 and 3, the relative errors of the approximations to the lowest eigenvalue and the lowest singular value of the considered matrices are shown. We can observe that our methods provide very accurate results, in contrast to the inaccurate results provided by the Matlab commands eig and svd.

Inverse matrix
On the other hand, in our second experiment, we have computed the inverses of the considered collocation, Wronskian and Gram matrices with the following algorithms:
- Matlab's function TNInverseExpand, with the corresponding matrix representation (9) of the bidiagonal decomposition (6) as input argument.
- Matlab's routine inv.
To analyze the errors, we have compared both Matlab approximations with the inverse matrix A^{−1} computed by Mathematica using 100-digit arithmetic, taking into account the formula e = ‖A^{−1} − Ã^{−1}‖_2 / ‖A^{−1}‖_2 for the corresponding relative error, where Ã^{−1} denotes the computed approximation. The obtained relative errors are shown in Table 4. Observe that the relative errors achieved through the bidiagonal decompositions obtained in this paper are much smaller than those obtained with the Matlab command inv.

Linear systems
Finally, in our third experiment, we have computed the solutions of the linear systems Cc = d, Wc = d and Gc = d, where d = ((−1)^{i+1} d_i)_{1≤i≤n+1} and d_i, i = 1, …, n + 1, are random nonnegative integer values.
We have solved these systems of linear equations associated with the considered collocation, Wronskian and Gram matrices with the following algorithms:
- Matlab's function TNSolve, using the matrix representation (9) of the proposed bidiagonal decompositions (6).
- Matlab's command \.
The vector provided by Mathematica has been considered as the exact solution c. Then, we have computed in Mathematica the relative error of the approximation ĉ computed with Matlab, taking into account the formula e = ‖c − ĉ‖_2 / ‖c‖_2.
In Table 5, the relative errors when solving the aforementioned linear systems for different values of n are shown. Notice that the proposed methods preserve the accuracy, which does not considerably decrease with the dimension of the system, in contrast with the results obtained with the Matlab command \.

Table 2 From left to right: relative errors when computing the lowest eigenvalue of collocation matrices at t_i = i/(n + 1), i = 1, …, n + 1, and of the other matrices considered in Table 1.

Table 5 From left to right: relative errors when solving the linear systems Cc = d, Wc = d and Gc = d.