Abstract
In this paper we deduce a bidiagonal decomposition of Gram and Wronskian matrices of geometric and Poisson bases. It is also proved that the Gram matrices of both bases are strictly totally positive, that is, all their minors are positive. The deduced bidiagonal decompositions are used to perform algebraic computations with high relative accuracy for Gram and Wronskian matrices of these bases. The provided numerical experiments illustrate the accuracy achieved, using the theoretical results, when computing the inverse, the eigenvalues, the singular values, or the solutions of certain linear systems.
1 Introduction
Many fundamental problems in interpolation and approximation give rise to interesting linear algebra computations related to Gram and Wronskian matrices. Unfortunately, these matrices are usually ill-conditioned and, consequently, the mentioned algebraic computations lose accuracy as their dimension increases.
Gram matrices arise in the computation of the best approximation onto a subspace of a space equipped with an inner product. Hilbert matrices are well-known, notoriously ill-conditioned Gram matrices corresponding to monomial bases. In [11], accurate algebraic computations with Hilbert matrices are achieved by using an elegant representation of them as a product of nonnegative bidiagonal matrices. To the best of the authors’ knowledge, up to now, computations with high relative accuracy (HRA) using Gram matrices of other bases have not been obtained.
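To give a concrete feeling for this ill-conditioning, the following sketch (our illustration, assuming NumPy and SciPy are available) computes the spectral condition number of Hilbert matrices of increasing dimension:

```python
# Illustration (ours): the condition number of the Hilbert matrix -- the
# Gram matrix of the monomial basis on [0, 1] -- grows very rapidly with
# the dimension, which is why standard floating point algorithms quickly
# lose relative accuracy on such Gram matrices.
import numpy as np
from scipy.linalg import hilbert

conds = {n: np.linalg.cond(hilbert(n)) for n in (5, 10, 15)}
for n, c in conds.items():
    print(n, c)
```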
Wronskian matrices arise in many applications, such as Hermite interpolation problems and, in particular, Taylor interpolation problems. In [15], the bidiagonal factorization of the Wronskian matrix of the monomial basis of the space of polynomials of a given degree and the bidiagonal factorization of the Wronskian matrix of the basis of exponential polynomials were obtained. Furthermore, in [16] a procedure to accurately compute the bidiagonal decomposition of collocation and Wronskian matrices of the wide family of Jacobi polynomials is proposed. The obtained results are used to get accurate computations using collocation and Wronskian matrices of well-known types of Jacobi polynomials.
The geometric distribution has applications in population and econometric models and the Poisson distribution is popular for modeling the number of times an event occurs in an interval of time or space. Associated with these distributions, the corresponding bases can be defined (see Sects. 3 and 5, respectively). In fact, the Poisson basis also plays a useful role in computer-aided geometric design (see [9, 20]). Collocation matrices of both bases were analyzed in [14], where their total positivity was proved and their bidiagonal factorization was obtained with HRA. Using such a bidiagonal factorization, the algorithms presented in [12] can be applied to solve algebraic problems with HRA. The bidiagonal factorization of the collocation matrices of other interesting bases can be found in [2, 3, 13, 17, 18].
In this paper, we deal with Gram and Wronskian matrices of the two types of bases mentioned in the previous paragraph. The total positivity of these matrices is analyzed. In contrast with the corresponding collocation and Gram matrices, we show that these Wronskian matrices are not totally positive. However, we relate them to other totally positive matrices, so that their associated bidiagonal factorizations can be used to provide accurate algorithms for the algebraic computations mentioned before.
We now describe the layout of the paper. In Sect. 2, we present basic notations and preliminary results on total positivity, Neville elimination, bidiagonal factorization, HRA and Wronskian matrices of monomial bases. In Sect. 3, we prove that the Gram matrix of geometric bases is strictly totally positive (STP) and a bidiagonal factorization for the resolution with HRA of related algebraic problems is proposed. The bidiagonal factorization of Wronskian matrices of geometric bases is also obtained in Sect. 4, where the methods to derive algorithms with HRA are also shown. Section 5 proves that Gram matrices of Poisson bases are STP. Furthermore, the bidiagonal factorization to derive accurate algorithms is deduced. In Sect. 6, Wronskian matrices of Poisson bases are considered and the method to derive accurate computations is given. Finally, Sect. 7 illustrates numerical experiments confirming the accuracy of the presented methods for the computation of eigenvalues, singular values, inverses or the solution of some linear systems related to Gram and Wronskian matrices of the considered bases. The complexity of the algorithms for computing the mentioned algebraic problems is comparable to that of the traditional LAPACK algorithms, which, as we shall see, deliver no such accuracy.
2 Notations and preliminary results
Let us recall that given a Hilbert space U under an inner product \(\langle \cdot , \cdot \rangle \) and an \((n+1)\)-dimensional subspace V generated by a basis \((f_0,\ldots , f_n)\), the best approximation in V, with respect to the norm defined in U, of a given \(u\in U\) is \(v=\sum _{i=0}^n c_{i+1}f_i\), where \(c = (c_1, \ldots , c_{n+1})^T\) is the solution of the linear system \(Gc = b\) and \(G=(G_{i,j})_{1\le i,j\le n+1}\) is the Gram matrix such that
and \(b=(b_{i })_{1\le i \le n+1}\) with \(b_i :=\langle f_{i-1}, u \rangle \).
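For instance (our illustration: the basis, the target function and all names are arbitrarily chosen, assuming NumPy and SciPy), the best approximation can be computed by assembling G and b by quadrature and solving \(Gc=b\):

```python
# Sketch (ours): least-squares approximation of u by solving the Gram
# system Gc = b, here for the monomial basis (1, x, x^2) on [0, 1] and
# u(x) = cos(x).
import numpy as np
from scipy.integrate import quad

f = [lambda x, k=k: x**k for k in range(3)]   # basis (f_0, f_1, f_2)
u = np.cos                                    # function to approximate

G = np.array([[quad(lambda x: fi(x) * fj(x), 0, 1)[0] for fj in f] for fi in f])
b = np.array([quad(lambda x: fi(x) * u(x), 0, 1)[0] for fi in f])
c = np.linalg.solve(G, b)                     # coefficients of the best approx.

v = lambda x: sum(ci * fi(x) for ci, fi in zip(c, f))
print(max(abs(v(x) - u(x)) for x in np.linspace(0, 1, 101)))  # small residual
```

Note that G here is precisely the \(3\times 3\) Hilbert matrix, so for larger n this direct solve is exactly the kind of computation that loses accuracy.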
Given a basis \((f_0, \ldots , f_n)\) of a space of functions with n continuous derivatives at \(x\in {\mathbb {R}}\), its Wronskian matrix at x is
where \(f^{(0)}(x):= f(x)\), \(f^{(1)}(x):=f'(x)\) and \(f^{(i)}(x)\), \(i\le n\), denotes the i-th derivative of f at x.
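In other words, the (i, j) entry of the Wronskian matrix is \(f_{j-1}^{(i-1)}(x)\). A short symbolic sketch (our illustration, with an arbitrary example basis, assuming SymPy):

```python
# Sketch (ours): building a Wronskian matrix symbolically; row i holds the
# (i-1)-th derivatives of the basis functions.
import sympy as sp

t = sp.symbols('t')
basis = [sp.exp(t), sp.sin(t), sp.cos(t)]     # an arbitrary example basis
W = sp.Matrix(3, 3, lambda i, j: sp.diff(basis[j], t, i))
print(W)
```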
We say that a matrix is totally positive (TP) if all its minors are nonnegative and strictly totally positive (STP) if all its minors are positive. In [1, 5, 21] interesting applications of TP matrices can be found.
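For small matrices, these definitions can be tested directly by enumerating all minors. A brute-force helper (ours, assuming NumPy; the cost is exponential in the dimension, so this is only for illustration):

```python
# Sketch (ours): brute-force check of total positivity by enumerating all
# minors of a square matrix.
import numpy as np
from itertools import combinations

def is_totally_positive(A, strict=False, tol=1e-12):
    n = A.shape[0]
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                d = np.linalg.det(A[np.ix_(rows, cols)])
                if (d <= tol) if strict else (d < -tol):
                    return False
    return True

# The 2x2 all-ones matrix is TP (its determinant is 0) but not STP:
A = np.ones((2, 2))
print(is_totally_positive(A), is_totally_positive(A, strict=True))
```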
The Neville elimination (NE) is an alternative procedure to Gaussian elimination (see [6,7,8]). Given an \((n+1)\times (n+1)\), nonsingular matrix \(A=(a_{i,j})_{1\le i,j\le n+1}\), the NE process calculates a sequence of matrices
so that the entries of \(A^{(k+1)}\) below the main diagonal in the k first columns, \(1\le k\le n\), are zeros and so, \(A^{(n+1)} \) is an upper triangular matrix. The matrix \(A^{(k+1)} =(a_{i,j}^{(k+1)})_{1\le i,j\le n+1}\) is computed from \(A^{(k)}=(a_{i,j}^{(k)})_{1\le i,j\le n+1}\) by means of the following relations
The (i, j) pivot of the NE process of the matrix A is
and, in particular, we say that \(p_{i,i}\) is the i-th diagonal pivot. Let us observe that whenever all pivots are nonzero, no row exchanges are needed in the NE procedure.
The (i, j) multiplier of the NE process of the matrix A is
NE is a useful tool to determine whether a given matrix is TP or STP, as shown by the following characterization, derived from Theorem 4.2 and the arguments on p. 116 of [8].
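A minimal sketch of the elimination scheme described above (our implementation, assuming NumPy and that no row exchanges are needed, i.e., all pivots are nonzero), returning the diagonal pivots and the multipliers:

```python
# Sketch (ours) of Neville elimination: each step subtracts from every row a
# multiple of the PREVIOUS row to create zeros below the diagonal, working
# upward from the last row so that the multipliers are read off from A^(k).
import numpy as np

def neville(A):
    A = A.astype(float).copy()
    n = A.shape[0]
    m = np.zeros((n, n))                      # m[i, j] ~ multiplier m_{i+1, j+1}
    for k in range(n - 1):                    # step k creates zeros in column k
        for i in range(n - 1, k, -1):         # bottom-up: rows still hold A^(k)
            m[i, k] = A[i, k] / A[i - 1, k]
            A[i, :] -= m[i, k] * A[i - 1, :]
    return np.diag(A).copy(), m               # diagonal pivots, multipliers

# For a TP matrix, all pivots and multipliers come out nonnegative:
piv, mult = neville(np.array([[1.0, 1.0], [1.0, 2.0]]))
print(piv, mult)
```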
Theorem 1
A nonsingular matrix \(A=(a_{i,j})_{1\le i,j\le n+1}\) is TP if and only if it admits a factorization of the form
where \(F_i\) and \(G_i\) are the TP, lower and upper triangular bidiagonal matrices given by
and \(D=\mathrm{diag}\left( p_{1,1},\ldots , p_{n+1,n+1}\right) \) has positive diagonal entries. The diagonal entries \(p_{i,i}\) of D are the diagonal pivots of the NE process of A and the elements \(m_{i,j}\), \(\widetilde{m}_{i,j}\) are nonnegative and coincide with the multipliers (4) of the NE process of A and \(A^T\), respectively. If, in addition, the entries \(m_{ij}\), \(\widetilde{m}_{ij}\) satisfy
then the decomposition (5) is unique.
In [10], the bidiagonal factorization (5) of an \((n+1)\times (n+1)\) nonsingular and TP matrix A is represented by means of a matrix \(BD(A)=( BD(A)_{i,j})_{1\le i,j\le n+1} \) such that
Remark 1
By Theorem 4.3 of [8], if \(m_{i,j}>0\), \(\widetilde{m}_{i,j}>0\), \(1\le j<i\le n+1\), and \(p_{i,i}>0\), \(1\le i\le n+1\), then A is STP.
Remark 2
Using the results in [6,7,8], given the bidiagonal factorization (5) of a nonsingular and TP matrix A, a bidiagonal decomposition of \(A^{-1}\) can be computed as
where \(\widetilde{F}_i\) and \(\widetilde{G}_i\) are the lower and upper bidiagonal matrices of the form (6), obtained by replacing the off-diagonal entries \(\{m_{i+1,1},\ldots , m_{n+1,n+1-i}\} \) and \(\{{\tilde{m}}_{i+1,1},\ldots , {\tilde{m}}_{n+1,n+1-i}\} \) by \(\{-m_{i+1,i},\ldots ,-m_{n+1,i} \}\) and \(\{-\tilde{m}_{i+1,i},\ldots ,- \tilde{m}_{n+1,i}\}\), respectively.
Remark 3
If a matrix A is nonsingular and TP, then \(A^{T}\) is also a nonsingular and TP matrix. Furthermore, by Theorem 1,
where \({F}_i\) and \({G}_i\) are the lower and upper bidiagonal matrices in (6). Consequently, if A is symmetric and (7) is satisfied, taking into account the uniqueness of the factorization, we immediately deduce that \(G_i=F_i^{T}\), \(i=1,\ldots ,n\), and then we have
where \(F_i\) is the lower bidiagonal matrix in (6), whose off-diagonal entries coincide with the multipliers of the NE process of A and D is the diagonal matrix with the pivots.
We say that a real value x is computed with high relative accuracy (HRA) whenever the computed value \({\tilde{x}}\) satisfies
where u is the unit round-off and \(K>0\) is a constant independent of the arithmetic precision. Clearly, HRA implies that the relative errors in the computations have the same order as the machine precision. It is well known that a sufficient condition for an algorithm to be computable with HRA is the no inaccurate cancellation (NIC) condition, which is satisfied if the algorithm only evaluates products, quotients, sums of numbers of the same sign, subtractions of numbers of opposite sign, or subtractions of initial data (cf. [4, 10]).
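The effect of inaccurate cancellation can be seen in a one-line floating point experiment (our illustration): subtracting nearly equal computed quantities destroys relative accuracy even though each individual operation is accurate.

```python
# Tiny illustration (ours): (1 + x) - 1 for small x loses relative accuracy
# because the rounding error of the addition is of the same size as x itself.
x = 1e-12
a = (1.0 + x) - 1.0          # subtraction of nearly equal computed numbers
b = x                        # the exact value
print(abs(a - b) / b)        # relative error far above machine precision
```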
If the bidiagonal factorization (5) of a nonsingular and TP matrix A can be computed with HRA then, the computation of its eigenvalues and singular values, the computation of \(A^{-1}\) and even the resolution of \(Ax=b\) for vectors b with alternating signs can be also computed with HRA using the algorithms provided in [11].
Finally, let \((p_{0} ,\ldots ,p_{n})\) be the monomial basis
of the space \({{\mathbf {P}}}^n\) of polynomials of degree less than or equal to n.
The following result, which will be used later, restates Corollary 1 of [15], providing the bidiagonal factorization (5) of \(W(p_{0},\ldots ,p_{n})(x)\), \(x\in {{\mathbb {R}}}\).
Proposition 1
Let \((p_{0},\ldots ,p_{n})\) be the monomial basis given in (11). For any \(x\in {\mathbb {R}}\), the Wronskian matrix \(W(p_{0},\ldots ,p_{n})(x)\) is nonsingular and can be factorized as follows,
where \(D=\text {diag}\{0!,1!,\dots ,n!\}\) and \(G_{i,n}\), \(i=1,\ldots ,n\), are the upper triangular bidiagonal matrices in (6) with
Moreover, if \(x>0\) then \(W(p_0,\ldots ,p_n )(x)\) is nonsingular and TP, its bidiagonal decomposition (5) is given by (12) and (13) and it can be computed with HRA.
In [15], using this result, accurate computations with Wronskian matrices of monomial bases are achieved.
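The explicit entries of \(W(p_0,\ldots ,p_n)(x)\), namely \(p_{j-1}^{(i-1)}(x)=\frac{(j-1)!}{(j-i)!}x^{j-i}\) for \(j\ge i\) and 0 otherwise, can be verified symbolically. A short sketch (ours, assuming SymPy):

```python
# Sketch (ours): the Wronskian of the monomial basis has the explicit entries
# W_{i,j} = (j-1)!/(j-i)! * x^{j-i} for j >= i (and 0 otherwise); we verify
# this against symbolic differentiation for n = 3 (indices 0-based below).
import sympy as sp
from math import factorial

x = sp.symbols('x')
n = 3
W_sym = sp.Matrix(n + 1, n + 1, lambda i, j: sp.diff(x**j, x, i))
W_formula = sp.Matrix(n + 1, n + 1,
    lambda i, j: sp.Integer(factorial(j)) // factorial(j - i) * x**(j - i)
                 if j >= i else 0)
print(W_sym - W_formula)   # zero matrix
```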
In the sequel we shall consider Gram and Wronskian matrices of geometric and Poisson bases. We will now deduce their bidiagonal decomposition (5) and see that algebraic computations with HRA can be achieved for all considered cases.
3 Total positivity and accurate computations with Gram matrices of geometric bases
The geometric distribution has many applications in population and econometric models. Let us recall that the probability of k failures up to obtain a success is given by
where the probability of success is \(t \in [0,1]\). Then, for a given \(n\in {\mathbb {N}}\), we can define an \((n+1)\)-dimensional polynomial basis \((g^n_0,\ldots , g_n^n)\), where
In Sect. 4 of [14], it is proved that the collocation matrix at positive values \(0<x_1<\cdots<x_{n+1}<1\) of \((g^n_0,\ldots , g_n^n)\) is STP. Furthermore, the bidiagonal factorization (5) of the collocation matrix \((g_{j-1}^n(x_{i}))_{1\le i,j\le n+1}\) is deduced by taking into account that each basis function \(g_k^n(x)\) can be obtained by scaling the polynomial \((1-x)^k\) by the positive function \(\varphi (x)=x\), \(k=0,\ldots ,n\). Using this factorization, HRA algebraic computations with collocation matrices of geometric bases \((g^n_0,\ldots , g_n^n)\) are achieved.
The geometric basis \((g^n_0,\ldots , g_n^n)\) given in (14) belongs to the vector space \(L_2[0,1]\) of square integrable functions, which is a Hilbert space under the inner product
The Gram matrix of \((g^n_0,\ldots , g_n^n)\) is the symmetric matrix \(G =(G_{i,j} )_{1\le i,j\le n+1}\), with
The following result proves that G is STP, providing the multipliers and the diagonal pivots of the NE of G.
Theorem 2
The \((n+1)\times (n+1)\) Gram matrix G described by (16) is STP. Moreover, G admits a factorization of the form (5) such that
where \(F_{i,n}\) and \(G_{i,n}\), \(i=1,\ldots ,n\), are the lower and upper triangular bidiagonal matrices given by (6) and \(D=\mathrm{diag}\left( p_{1,1},\ldots , p_{n+1,n+1}\right) \). The entries \(m_{i,j}, \widetilde{m}_{i,j}\) and \(p_{i,i}\) are given by
Proof
Let \(G^{(1)}:=G\) and \(G^{(k)} =(G_{ij}^{(k)})_{1\le i,j \le n+1}\), \(k=2,\ldots , n+1\), be the computed matrices after \(k-1\) steps of the NE of G. Using induction on k, let us first see that
For \(k=1\), \(G_{i,j}^{(1)}= \frac{ 1 }{ 3 \binom{i+j+1}{3} } \) and (20) holds. Let us now suppose that (20) holds for some \(k\in \{1,\ldots ,n-1 \}\). Then, it can be easily checked that
Since \(G_{i,j}^{(k+1)}=G_{i,j}^{(k)}- \frac{G_{i,k}^{(k)}}{G_{i-1,k}^{(k)}}G_{i-1,j}^{(k)}\), we have
Finally, taking into account the following equalities
we can write
and formula (20) also holds for \(k+1\).
Now, by (3) and (20), we can easily deduce that the pivots \(p_{i,j}\) of the NE of G satisfy
and, for the particular case \(i=j\),
Clearly, for \(i=1\) in (24) we obtain \(p_{1,1}=1/3\). Moreover, it can be checked that
and (19) holds.
Now, let us observe that, by formula (24), the pivots of the NE of G are positive and so, this elimination can be performed without row exchanges.
Finally, using (4) and (23), the multipliers \(m_{i,j}\) can be written as
Taking into account that G is a symmetric matrix, we conclude that \({\widetilde{m}}_{i,j}=m_{i,j}\), \(1\le j<i\le n+1\) and (18) holds. Clearly, the pivots and multipliers of the NE of G are positive and so, G is STP (see Remark 1).\(\square \)
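The closed form for the Gram entries used at the start of the proof, \(G_{i,j} = 1/\bigl(3\binom{i+j+1}{3}\bigr)\), can be sanity-checked against direct quadrature of \(\int_0^1 x^2(1-x)^{i+j-2}\,dx = \langle g_{i-1}, g_{j-1}\rangle\). A small Python sketch (ours, assuming SciPy):

```python
# Numerical sanity check (ours): the Gram entries of the geometric basis
# g_k(x) = x(1-x)^k satisfy G_{i,j} = 1 / (3 * binom(i+j+1, 3)).
from math import comb
from scipy.integrate import quad

n = 4
max_diff = 0.0
for i in range(1, n + 2):
    for j in range(1, n + 2):
        integral = quad(lambda x: x**2 * (1 - x)**(i + j - 2), 0, 1)[0]
        closed = 1.0 / (3 * comb(i + j + 1, 3))
        max_diff = max(max_diff, abs(integral - closed))
print(max_diff)   # negligible: the closed form matches the quadrature
```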
As a direct consequence of Theorem 2 we can deduce that the bidiagonal factorization (5) of the Gram matrix of the geometric basis \((g^n_0,\ldots , g_n^n)\) can be computed with HRA. Furthermore, taking into account Remark 2, its inverse matrix can also be computed with HRA. We state these important properties in the following result.
Corollary 1
The bidiagonal factorization (5) of the Gram matrix G described by (16), corresponding to the geometric basis \((g^n_0,\ldots , g_n^n)\) in (14), can be computed with HRA. Consequently, the computation of the eigenvalues and singular values of G, the computation of the matrix \(G^{-1}\), as well as the resolution of linear systems \(Gc = b\), where the entries of \(b = (b_1, \ldots , b_{n+1})^T\) have alternating signs, can be performed with HRA.
Section 7 will show the accurate results obtained when solving algebraic problems with Gram matrices of geometric bases, using the bidiagonal factorization (5) provided by Theorem 2 and the functions in the library TNTool of [12].
4 Bidiagonal factorization of Wronskian matrices of geometric bases
In this section we are going to deduce a bidiagonal decomposition of the form (5) of the Wronskian matrix of geometric bases (14). We shall see that, using this factorization and the algorithms in [12], many algebraic problems related to these matrices can be solved with HRA.
Let us start with some auxiliary results.
Lemma 1
For a given \(t \in {\mathbb {R}}\) and \(n \in {\mathbb {N}}\), let \(U_{k,n}=(u_{i,j}^{(k,n)})_{1\le j, i \le n+1}\), \(k=1, \ldots , n\), be the \((n+1)\times (n+1)\), upper triangular bidiagonal matrix with unit diagonal entries, such that
Then \(U_n:=U_{1,n}\cdots U_{n,n}\) is an upper triangular matrix and
Proof
Since the matrix \(U_n\) is the product of upper triangular bidiagonal matrices, we deduce that \(U_n\) is upper triangular. Now, using Proposition 1, we can deduce that
where \(p_{j}(t):=t^j\), \(j=0,\ldots ,n\). Finally, taking into account that
equalities (26) are immediately obtained. \(\square \)
Theorem 3
Let \((g_0^n,\ldots , g_n^n)\) be the \((n+1)\)-dimensional geometric basis defined in (14). The Wronskian matrix \(W:=W( g_0,\ldots ,g_n)(x)\) at a given \(x\in {\mathbb {R}}\), \(x\ne 0\), admits a factorization of the form
where \(L_{1}=(l_{i,j}^{(1,n)})_{1\le j, i \le n+1}\) is the lower triangular bidiagonal matrix with unit diagonal entries, such that
\(U_{k,n}=(u_{i,j}^{(k,n)})_{1\le j, i \le n+1}\), \(k=1,\ldots ,n\), are the upper triangular bidiagonal matrices with unit diagonal entries, such that
and \(D_n\) is the diagonal matrix \(D_n=\mathrm{diag}\left( d_{1}^{(n )}, \ldots ,d_{n+1}^{(n )}\right) \) with
Proof
First, let us observe that \(L_1D_n =(\widetilde{l}_{i,j}^{(n)})_{1\le i,j\le n+1}\) is the lower triangular bidiagonal matrix such that
On the other hand, using Lemma 1 with \(t:= 1-x \), we derive that \(U_n:=U_{1,n}\cdots U_{n-1,n}U_{n,n}\) is the \((n+1)\times (n+1)\) upper triangular matrix described by
In order to prove the result, taking into account (27), (31) and (32), we have to check that
for \(1\le i,j \le n+1\). Since
equalities (33) readily follow for \(1\le j\le i\) and \(i=1,\ldots ,n+1\). For \(j>i\), (33) can be proved by induction on i. If \(i=1\), we clearly have \( (-1)^0 (1-1)! \binom{j-1}{0} (1-x)^{j-1} jx / j = g_{j-1}(x)\), for \(j=2,\ldots ,n+1\). Now, let us suppose that (33) holds for \(i>1\) and \(j> i\). Then, we can write
Using (34), since \((i-1)!\binom{j-1}{i-1} =i! \binom{j-1}{i}\frac{1}{j-i}\), we have
for \(j=i+1,\ldots ,n\), and consequently (33) follows. \(\square \)
Example 1
Let us illustrate the bidiagonal factorization (27) of \( W( g_0,\ldots ,g_n)(x)\) with a simple example. For \(n=2\), the bidiagonal factorization (27) of the Wronskian matrix of the geometric basis \((x, x(1-x) ,x(1-x)^2)\) at \(x\in {\mathbb {R}}\) is
Let us observe that, from (8) and Theorem 3, the bidiagonal factorization (27) of \( W:=W( g_0,\ldots ,g_n)(x)\) can be represented by means of the \((n+1)\times (n+1)\) matrix \(BD(W)=(BD(W)_{i,j})_{1\le i,j\le n+1}\) such that
Analyzing the sign of the entries in (35), from Theorem 1, we deduce that the matrix \(W( g_0,\ldots ,g_n)(x)\) is not TP for any \(x\in {\mathbb {R}}\). However, the following result shows that, using the bidiagonal decomposition (27), the solution of several algebraic problems related to these matrices can be obtained with HRA.
Corollary 2
Let \(W:=W(g_0,\ldots ,g_n)(x)\) be the Wronskian matrix of the geometric basis defined in (14) and J the diagonal matrix \(J:=\text {diag}((-1)^{i-1} )_{1\le i\le n+1}\). Then, for \(x\ge 1\),
is a TP matrix and its bidiagonal factorization (5) can be computed with HRA. Consequently, the computation of the singular values of W, of the matrix \( W^{-1}\) and the resolution of linear systems \( W c = b\), where the entries of \(b = (b_1, \ldots , b_{n+1})^T\) have alternating signs, can be performed with HRA.
Proof
Using (27) and the fact that \(J^2\) is the identity matrix, we can write
which gives the bidiagonal factorization (5) of \(W_J\). Now, it can be easily checked that if \( x-1\ge 0\), the bidiagonal matrices \(L_1\), \(JU_{i,n}J\), \(i=1,\ldots ,n\), as well as the diagonal matrix \(D_nJ\) are TP. By Theorem 1, \(W_J\) is TP for \(x\ge 1\) and then the computation of its bidiagonal decomposition (5), the computation of its eigenvalues and singular values, the inverse matrix \(W_{J}^{-1}\) and the resolution of \(W_{J}c= b\), where \(b = (b_1, \ldots , b_{n+1})^T\) has alternating signs, can be performed with HRA (see Section 3 of [11]).
On the other hand, since J is a unitary matrix, the singular values of \(W_{J} \) coincide with those of W and so, their computation for \(x\ge 1\) can be performed with HRA. Similarly, taking into account that
we can compute \(W ^{-1}\) accurately. Finally, if we have a linear system of equations \(W c = b\), where the elements of \(b = (b_1, \ldots , b_{n+1})^T\) have alternating signs, we can solve with HRA the system \(W_{J}d = b\) and then obtain \(c=Jd\).\(\square \)
Section 7 will show that the resolution of algebraic problems with \(W_J\), and consequently with W, can be performed through the proposed bidiagonal factorization with an accuracy independent of the conditioning or the size of the problem.
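On the \(n=2\) case of Example 1 the total positivity claim of Corollary 2 can be checked by brute force: with \(J=\text {diag}(1,-1,1)\) and reading \(W_J\) as \(WJ\) (consistent with the substitution \(c=Jd\) at the end of the proof), all minors of \(W_J\) at \(x=1.5\ge 1\) are nonnegative. A sketch (ours, assuming NumPy; the reading of \(W_J\) is our interpretation):

```python
# Check (ours): W is the Wronskian of (x, x(1-x), x(1-x)^2); all minors of
# W*J at x = 1.5 are nonnegative, so W_J is totally positive there.
import numpy as np
from itertools import combinations

x = 1.5
W = np.array([[x,   x * (1 - x),  x * (1 - x)**2],
              [1.0, 1 - 2 * x,    1 - 4 * x + 3 * x**2],
              [0.0, -2.0,         -4 + 6 * x]])   # rows: values, 1st, 2nd derivatives
WJ = W @ np.diag([1.0, -1.0, 1.0])

ok = all(np.linalg.det(WJ[np.ix_(r, c)]) >= -1e-12
         for k in (1, 2, 3)
         for r in combinations(range(3), k)
         for c in combinations(range(3), k))
print(ok)   # True
```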
5 Total positivity and factorizations of Gram matrices of Poisson bases
The Poisson distribution is useful when modeling the number of times an event occurs in an interval of time or space. An event can occur \(k=0, 1, 2,\ldots \) times in an interval. If the rate parameter, that is, the average number of events in an interval is designated by t, then the probability of observing k events in an interval is given by
The Poisson basis functions
are the limit as n tends to infinity of the Bernstein basis of degree n over the interval [0, n], that is,
and they also play a useful role in CAGD (see [20]).
In Sect. 4 of [14], it is proved that the collocation matrix of the Poisson basis \((P_0,\ldots ,P_n)\) at positive values \(0<x_1<\cdots <x_{n+1}\) is STP. Furthermore, the bidiagonal factorization of the collocation matrix \((P_{j-1}(x_{i}))_{1\le i,j\le n+1}\) is deduced by taking into account that each basis function \(P_k(x)\) can be obtained by scaling the polynomial \(x^k/k!\) by the positive function \(\varphi (x)=e^{-x}\), \(k=0,\ldots ,n\). Using this factorization, accurate algebraic computations with collocation matrices of Poisson bases are achieved.
The Poisson basis functions belong to the vector space \(L_2 (0, \infty )\) of square integrable functions, which is a Hilbert space under the inner product
The Gram matrix of the Poisson basis \((e^{-x}, xe^{ - x},\ldots , x^ne^{-x}/n!)\) is a symmetric matrix \(G =\left( g_{i,j}\right) _{1\le i,j\le n+1}\) with
In the following result, the multipliers and the diagonal pivots of the NE of the Gram matrix G described in (40) are provided. Furthermore, it is proved that G is an STP matrix.
Theorem 4
The \((n+1)\times (n+1)\) Gram matrix G described by (40) is STP. Moreover, G admits a factorization of the form (5) such that
where \(F_{i,n}\) and \(G_{i,n}\), \(i=1,\ldots ,n\), are the lower and upper triangular bidiagonal matrices given by (6) and \(D_n=\mathrm{diag}\left( p_{1,1},\ldots , p_{n+1,n+1}\right) \). The entries \(m_{i,j}, \widetilde{m}_{i,j}\) and \(p_{i,i}\) are given by
Proof
Let \(G^{(1)}:=G\) and \(G ^{(k)}:=(g_{ij}^{(k)})_{1\le i,j \le n+1}\), \(k=2,\ldots , n+1\), be the matrices obtained after \(k-1\) steps of the NE process of the Gram matrix G. Let us see, by induction on k, that
For \(k=1\), we have \(g_{i,j}^{(1)}= \frac{(i+j-2)! }{ 2 ^{ i+j-1} (i-1)! (j-1)! }\) and (44) holds. Let us now suppose that (44) holds for some \(k\in \{1,\ldots ,n-1 \}\). Then it can be easily checked that
Since \(g_{i,j}^{(k+1)}=g_{i,j}^{(k)}-\frac{g_{i,k}^{(k)}}{g_{i-1,k}^{(k)}}g_{i-1,j}^{(k)}\), we have
for \( 1\le j,i\le n+1\) and formula (44) also holds for \(k+1\).
Now, by (3) and (44), we can easily deduce that the pivots \(p_{i,j}\) of the NE of G satisfy
and, for the particular case \(i=j\), \(p_{i,i}= { 1 }/{2 ^{2i-1}}\) for \(i=1,\ldots ,n+1\). Then, \(p_{1,1}=1/2\) and \( p_{i+1,i+1}/p_{i,i}=1/4 \) and so, (43) holds. By formula (45), \(p_{i,j}>0\) and so, the NE can be performed without row exchanges.
Finally, using (4) and (45), the multipliers \(m_{i,j}\) of the NE of G can be written as
Taking into account that G is a symmetric matrix, we conclude that \({\widetilde{m}}_{i,j}=m_{i,j}\), \(1\le j<i\le n+1\) and (42) holds. Clearly, the pivots and multipliers of the NE of G are positive and so, G is STP (see Remark 1).\(\square \)
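The closed form for the Gram entries used at the start of the proof, \(g_{i,j} = \frac{(i+j-2)!}{2^{i+j-1}(i-1)!(j-1)!}\), can be sanity-checked against direct quadrature of \(\int_0^\infty \frac{x^{i+j-2}e^{-2x}}{(i-1)!(j-1)!}\,dx = \langle P_{i-1}, P_{j-1}\rangle\). A small Python sketch (ours, assuming SciPy):

```python
# Numerical sanity check (ours): the Gram entries of the Poisson basis
# P_k(x) = x^k e^{-x} / k! on (0, infinity) satisfy
# g_{i,j} = (i+j-2)! / (2^{i+j-1} (i-1)! (j-1)!).
from math import factorial, exp
from scipy.integrate import quad

n = 3
max_diff = 0.0
for i in range(1, n + 2):
    for j in range(1, n + 2):
        integrand = lambda x: x**(i + j - 2) * exp(-2 * x) / (factorial(i - 1) * factorial(j - 1))
        integral = quad(integrand, 0, float('inf'))[0]
        closed = factorial(i + j - 2) / (2**(i + j - 1) * factorial(i - 1) * factorial(j - 1))
        max_diff = max(max_diff, abs(integral - closed))
print(max_diff)   # negligible: the closed form matches the quadrature
```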
As a direct consequence of Theorem 4 we can deduce that the bidiagonal factorization (5) of the Gram matrix of the Poisson basis \((e^{-x}, xe^{- x},\ldots , x^ne^{-x}/n!)\) can be computed with HRA. Furthermore, taking into account Remark 2, its inverse matrix can also be computed with HRA. We state these important properties in the following corollary.
Corollary 3
The bidiagonal factorization (5) of the Gram matrix G described by (40), corresponding to the Poisson basis \((e^{-x}, xe^{- x},\ldots , x^ne^{-x}/n!)\), can be computed with HRA. Consequently, the computation of the eigenvalues and singular values of G, the computation of the matrix \(G^{-1}\), as well as the resolution of linear systems \(Gc = b\), where the entries of \(b = (b_1, \ldots , b_{n+1})^T\) have alternating signs, can be performed with HRA.
Section 7 illustrates the accurate results obtained when solving algebraic problems with the Gram matrix G, using the bidiagonal factorization (5) provided by Theorem 4 and the functions in the library TNTool of [12].
6 Bidiagonal factorization of Wronskian matrices of Poisson bases
In this section we are going to consider Wronskian matrices of Poisson bases and deduce their bidiagonal factorization (5). Using this factorization and the algorithms in [12], many algebraic problems related to these matrices will be solved with great accuracy.
Let us start with some auxiliary results.
Lemma 2
Let \(n\in {\mathbb {N}}\) and let \(L_{k,n}=(l_{i,j}^{(k,n)})_{1 \le i, j \le n+1}\), \(k=1,\ldots ,n\), be the lower triangular bidiagonal matrices with unit diagonal entries, such that
Then \(L_n:=L_{n,n}L_{n-1,n}\cdots L_{1,n}\) is a lower triangular bidiagonal matrix with
Proof
Clearly, \(L_n\) is a lower triangular matrix since it is the product of lower triangular bidiagonal matrices. Now let us prove (47) by induction on n. For \(n=1\),
and (47) clearly holds. Let us now suppose that (47) holds for \(n\ge 1\). Then
where \({\tilde{L}}_{n+1}:= L_{n+1,n+1} L_{n,n+1}\cdots L_{2,n+1}\) satisfies \({\tilde{L}}_{n+1}=( \tilde{l}_{i,j}^{(n+1)})_{1\le i,j\le n+2}\) with \( \tilde{l}_{i,1}=\delta _{i,1}\), \( \tilde{l}_{1,i}=\delta _{1,i}\), \(i=1,\ldots ,n+2\), and the submatrix of \({\tilde{L}}_{n+1}\) containing rows and columns of places \(\{2,\ldots ,n+2\} \), denoted by \({\tilde{L}}_{n+1}[2,\ldots , n+2]\), satisfies \({\tilde{L}}_{n+1}[2,\ldots , n+2]= L_{n,n} L_{n-1,n}\cdots L_{1,n}\).
Then we have that
Now, taking into account that
and the fact that \(\binom{i-2}{j-2}+\binom{i-2}{j-1}=\binom{i-1}{j-1}\), we deduce that \(L_{n+1}=(l_{i,j}^{(n+1)})_{1\le i,j\le n+2}\) satisfies
for \(1\le j\le i\le n+2\). \(\square \)
Lemma 3
For a given \(t \in {\mathbb {R}}\) and \(n \in {\mathbb {N}}\), let \(U_{k,n}=(u_{i,j}^{(k,n)})_{1\le j, i \le n+1}\), \(k=1, \ldots , n\), be the \((n+1)\times (n+1)\), upper triangular bidiagonal matrix with unit diagonal entries, such that
Then \(U_n:=U_{1,n}\cdots U_{n,n}\) is an upper triangular matrix and
Proof
First, let us observe that given \({\widetilde{M}}_n=({\widetilde{m}}_{i,j} )_{1\le i,j\le n+1}\) and a nonsingular diagonal matrix \(D_n=\mathrm{diag}\left( d_{1} ,\ldots ,d_{n+1} \right) \),
where \(M_n=(m_{i,j} )_{1\le i,j\le n+1}\) satisfies \(m_{i,j} = {\widetilde{m}}_{i,j} d_{i }/d_{j} \) for \(i,j=1,\ldots ,n+1\). Now, let us define \(D_n:=\mathrm{diag}\left( d_i \right) _{1\le i \le n+1}\), such that \(d_i= (i-1)!\), \(i=1,\ldots ,n+1\), and let \({\widetilde{U}}_{k,n}=({\widetilde{u}}_{i,j}^{(k,n)})_{1\le j, i \le n+1}\), \(k=1, \ldots , n\), be the \((n+1)\times (n+1)\), upper triangular bidiagonal matrix with unit diagonal entries, such that
Taking into account the fact that \( d_{i-1} /d_{i} = 1/(i-1) \), \(i=2,\ldots ,n+1\), we can write
Consequently, \( U_{1,n} \cdots U_{n,n}= D_{n} {\widetilde{U}}_{1,n} \cdots {\widetilde{U}}_{n,n} D_{n}^{-1}\) and, applying Lemma 1 to \({\widetilde{U}}_{1,n} \cdots {\widetilde{U}}_{n,n}\),
\(\square \)
Now, we can deduce a bidiagonal factorization of Wronskian matrices of Poisson bases.
Theorem 5
Let \(n\in {\mathbb {N}}\) and let \((P_0,\ldots ,P_n)\) be the basis (38) of Poisson functions. For a given \(x\in {\mathbb {R}}\), \(W:=W(P_0,\ldots ,P_n)(x)\) admits a factorization of the form
where \(L_{k,n}=(l_{i,j}^{(k,n)})_{1\le j, i \le n+1}\), \(k=1,\ldots ,n\), are the lower triangular bidiagonal matrices with unit diagonal entries, such that
\(U_{k,n}=(u_{i,j}^{(k,n)})_{1\le j, i \le n+1}\), \(k=1,\ldots ,n\), are the upper triangular bidiagonal matrices with unit diagonal entries, such that
and \(D_n\) is the diagonal matrix \(D_n=\mathrm{diag}\left( d_{1}^{(n)}, \ldots ,d_{n+1}^{(n)}\right) \) with \(d_i^{(n)}=e^{-x}\), \(i=1,\ldots ,n+1\).
Proof
By Lemma 2, \(L_n := L_ {n,n}L_{n-1,n}\cdots L_{1,n} \) is a lower triangular matrix and satisfies
On the other hand, by Lemma 3, \(U_n:=U_{1,n} \cdots U_{n-1,n} U_{n,n}\) satisfies
Now, let us see that \(W=L_n D_n U_n\), that is,
We shall prove (49) by induction on i. For \(i=1\),
and (49) holds. Now, let us assume that (49) holds for \(i\ge 1\). For any j such that \(1\le j\le i \), it can be checked that
where
In a similar way it can be checked that, for any \(j >i\), we have
where
Therefore,
and (49) holds. \(\square \)
Example 2
Let us illustrate the bidiagonal factorization (48) of \(W(P_0,\ldots ,P_n)(x)\) with a simple example. For \(n=2\), the bidiagonal factorization of the Wronskian matrix of the basis \((e^{-x},xe^{-x},\frac{x^2}{2}e^{-x})\) at \(x\in {\mathbb {R}}\) is
Using (8) and Theorem 5, the bidiagonal factorization (48) of \(W:=W(P_0,\ldots ,P_n)(x)\) can be represented by means of the matrix \(BD(W)=(BD(W)_{i,j})_{1\le i,j\le n+1}\) such that
Analyzing the sign of the entries in (50), we can deduce from Theorem 1 that the Wronskian matrix of the Poisson basis is not TP for any \(x\in {\mathbb {R}}\). However, the following result shows that the solution of several algebraic problems related to these matrices can be obtained with HRA by using the bidiagonal decomposition (48).
Corollary 4
Let \(W:= W( P_ 0,\ldots , P_ n)(x)\) be the Wronskian matrix of the Poisson basis defined in (38) and J the diagonal matrix \(J:=\text {diag}((-1)^{i-1} )_{1\le i\le n+1}\). Then, for any \(x<0\),
is an STP matrix and, for \(x=0\), \(W_J\) is TP. If, in addition, we know \(e^{-x}\) with HRA, then the bidiagonal factorization (5) of \(W_J\) can be computed with HRA. Consequently, the computation of the eigenvalues and singular values of W, the computation of the matrix \(W^{-1}\), as well as the resolution of linear systems \(Wc= b\), where the entries of \(b = (b_1, \ldots , b_{n+1})^T\) have the same sign, can be performed with HRA.
Proof
Using Theorem 5 and the fact that \(J^2\) is the identity matrix, by (48) we can write
which gives its bidiagonal factorization (5). Now, it can be easily checked that the multipliers and diagonal pivots of the bidiagonal factorization (52) of \(W_J\) are positive if \(x<0\) and nonnegative for \(x=0\). Therefore, by Theorem 1, \(W_J\) is TP for \(x=0\) and, by Remark 1, \(W_J\) is STP for any \(x<0\), and its bidiagonal decomposition (52) can be computed with HRA for \(x\le 0\). This fact guarantees the computation with HRA of the eigenvalues and singular values of \(W_J\), the inverse matrix \(W_J^{-1}\) and the solution of the linear systems \(W_J c= d\), where \(d = (d_1, \ldots , d_{n+1})^T\) has alternating signs (see Section 3 of [11]).
Let us observe that, since J is a unitary matrix, the eigenvalues and singular values of W coincide with those of \(W_J\) and therefore, using the bidiagonal decomposition (52) of \(W_J\), their computation for \(x<0\) can be performed with HRA.
For the accurate computation of \(W^{-1}\), we can take into account that
Since, for \(x<0\), \(W_J^{-1}=({\tilde{w}}_{i,j})_{1\le i,j\le n+1}\) can be computed with HRA and, by (53), the inverse of the Wronskian matrix W satisfies \(W^{-1}=((-1)^{i+j}{\tilde{w}}_{i,j})_{1\le i,j\le n+1}\), we can also accurately compute \(W^{-1}\) by means of a suitable change of sign of the accurately computed entries of \(W_J^{-1}\).
Finally, if we have a linear system of equations \(W c = b\), where the elements of \(b = (b_1, \ldots , b_{n+1})^T\) have the same sign, we can compute with HRA the solution \(d\in {\mathbb {R}}^{n+1}\) of \(W_J d= Jb\) and, consequently, the solution \(c\in {\mathbb {R}}^{n+1}\) of the initial system since \(c=Jd\). \(\square \)
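The sign-similarity argument used in this proof is easy to verify numerically. The following Python sketch is purely illustrative and not the paper's Matlab implementation: a generic well-conditioned matrix stands in for the Wronskian \(W\) (an assumption, since the entries of \(W\) depend on formulas given earlier in the paper), and `np.linalg.solve` stands in for the HRA solver applied to \(W_J\).

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5                                       # matrix size n+1
W = rng.random((m, m)) + m * np.eye(m)      # stand-in for the Wronskian W
J = np.diag([(-1) ** i for i in range(m)])  # J = diag((-1)^{i-1}), 1-based

WJ = J @ W @ J                              # the scaled matrix W_J

# J is orthogonal with J^{-1} = J, so W and W_J share their spectrum;
# compare the sorted moduli of the eigenvalues
ew = np.sort(np.abs(np.linalg.eigvals(W)))
ewj = np.sort(np.abs(np.linalg.eigvals(WJ)))

# entrywise sign flip: (W^{-1})_{ij} = (-1)^{i+j} (W_J^{-1})_{ij}
S = (-1.0) ** (np.arange(m)[:, None] + np.arange(m)[None, :])
Winv = S * np.linalg.inv(WJ)

# solve W c = b through the scaled system: W_J d = J b, then c = J d
b = rng.random(m)
c = J @ np.linalg.solve(WJ, J @ b)
```

In the paper, the solves with \(W_J\) are carried out through its bidiagonal factorization with HRA; the dense solve above merely replaces that step for the purpose of checking the sign manipulations.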
Observe that Corollary 4 requires the evaluation of \(e^{-x}\) with HRA. Even when this condition does not hold, Sect. 7 will show that the resolution of algebraic problems with \(W_J\) through the proposed bidiagonal factorization achieves an accuracy independent of the conditioning or the size of the problem, and thus comparable to HRA. Consequently, as in the proof of Corollary 4, we can deduce that the computation of the eigenvalues and singular values of W, the matrix \(W^{-1}\), as well as the solution of linear systems \(Wx = b\), where the entries of \(b = (b_1, \ldots , b_{n+1})^T\) have the same sign, can be performed with an accuracy comparable to HRA.
7 Numerical experiments
Let us suppose that A is an \((n+1)\times (n+1)\) nonsingular TP matrix whose bidiagonal decomposition (5) is represented by means of the matrix BD(A) (see (8)). If BD(A) can be computed with HRA, then the Matlab functions TNEigenValues, TNSingularValues, TNInverseExpand and TNSolve of the library TNTools in [12] take BD(A) as input argument and compute with HRA the eigenvalues of A, the singular values of A, the inverse matrix \(A^{-1}\) and the solution of systems of linear equations \(Ax= b\), for vectors b whose entries have alternating signs. The computational cost of the function TNSolve is \(O(n^2)\) elementary operations. On the other hand, as can be checked on page 303 of [19], the function TNInverseExpand also has a computational cost of \(O(n^2)\), improving on the \(O(n^3)\) cost of computing the inverse matrix by solving linear systems with TNSolve, taking the columns of the identity matrix as data. The computational cost of the other mentioned functions is \(O(n^3)\).
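The \(O(n^2)\) cost of TNSolve stems from passing the right-hand side through the \(O(n)\) bidiagonal factors of the decomposition (5), each of which is solved in \(O(n)\) operations. The sketch below illustrates this idea only; it is not the TNTools implementation, and the factor storage format (a kind flag plus diagonal and off-diagonal vectors) is an assumption chosen for the example.

```python
import numpy as np

def solve_bidiagonal_product(factors, b):
    """Solve A x = b where A = B_1 @ B_2 @ ... @ B_k and each B_j is
    bidiagonal, given as a tuple (kind, diag, off) with kind in {'L','U'}.
    Each bidiagonal solve costs O(n), so k = O(n) factors cost O(n^2)."""
    x = np.array(b, dtype=float)
    for kind, diag, off in factors:        # leftmost factor first
        n = len(diag)
        y = np.empty(n)
        if kind == 'L':                    # lower bidiagonal: forward substitution
            y[0] = x[0] / diag[0]
            for i in range(1, n):
                y[i] = (x[i] - off[i - 1] * y[i - 1]) / diag[i]
        else:                              # upper bidiagonal: back substitution
            y[n - 1] = x[n - 1] / diag[n - 1]
            for i in range(n - 2, -1, -1):
                y[i] = (x[i] - off[i] * y[i + 1]) / diag[i]
        x = y
    return x
```

Solving the factors in sequence, from the leftmost one onwards, yields the solution of the product system without ever forming A explicitly.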
For the geometric basis \((g^n_0,\ldots , g_n^n)\), \(n\in {\mathbb {N}}\), using Theorem 2, we have implemented a Matlab function that computes BD(G) for its Gram matrix G, described in (16). Furthermore, using Theorem 3 and Corollary 2, we have implemented a Matlab function for the computation of \(BD(W_J)\), where \(W_{J}\) is the scaled Wronskian matrix at \(x\ge 1\), \(W_{J} := W J\) given in (36).
For the Poisson basis \((P_0,\ldots ,P_n)\), \(n\in {\mathbb {N}}\), considering Theorem 4, we have also implemented a Matlab function that computes BD(G) for its Gram matrix G described in (40). Finally, using Theorem 5 and Corollary 4, we have implemented a Matlab function for the computation of \(BD(W_{J})\) for the matrix \(W_{J} := JWJ\), obtained from the Wronskian matrix W at \(x\le 0\) (see (51)).
Taking as argument the computed matrix representation of the corresponding bidiagonal decomposition (5), we have solved several algebraic problems with the Matlab functions of the library TNTools in [12]: TNEigenValues, TNSingularValues, TNInverseExpand and TNSolve. Additionally, the solutions have also been computed with the Matlab commands eig, svd, inv and \(\setminus \), respectively. To analyze the accuracy of the approximations, all the solutions have also been calculated in Mathematica using 100-digit arithmetic. The values provided by Mathematica have been taken as the exact solution of each algebraic problem when computing the relative errors.
Observe that, in all cases, the computational complexity of computing the entries \(m_{i,j}\), \({\tilde{m}}_{i,j}\), \(1\le j<i\le n+1\), is \(O(n^2)\), and that of computing \(p_{i,i}\), \(1\le i\le n+1\), is O(n).
In the numerical experimentation, we have considered Gram and Wronskian matrices corresponding to different \((n+1)\)-dimensional geometric and Poisson bases with dimensions \(n+1=5, 10, 15, 20\). In order to analyze the accuracy of the computations with the Wronskian matrices, we have considered several values \(x\ge 1\) for geometric bases (see Corollary 2) and several values \(x< 0\) for Poisson bases (see Corollary 4). We illustrate the obtained results with two particular cases: Wronskian matrices of the geometric bases at \(x=10\) and Wronskian matrices of Poisson bases at \(x=-40\). The software implementing the above-mentioned routines is available from the authors upon request.
The 2-norm condition number of the considered Gram and Wronskian matrices has been obtained by means of the Mathematica command Norm[A,2]\(\cdot \) Norm[Inverse[A],2] and is shown in Tables 1 and 2, respectively. The condition numbers clearly increase significantly with the dimension of the matrices. This explains why traditional methods fail to obtain accurate solutions for the aforementioned algebraic problems. In contrast, the numerical results will illustrate the high accuracy obtained when using the bidiagonal decompositions deduced in this paper together with the Matlab functions available in [12].
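The growth of condition numbers with the dimension can be reproduced with the Hilbert matrix, the classical ill-conditioned Gram matrix of the monomial basis mentioned in the introduction; it serves here only as a stand-in, since the entries of the Gram matrices of geometric and Poisson bases depend on formulas given earlier in the paper.

```python
import numpy as np

def hilbert(n):
    """Hilbert matrix H with H[i, j] = 1/(i + j + 1) (0-based indices),
    the Gram matrix of the monomial basis on [0, 1]."""
    idx = np.arange(n)
    return 1.0 / (idx[:, None] + idx[None, :] + 1)

# 2-norm condition numbers for the dimensions used in the experiments
conds = {n: np.linalg.cond(hilbert(n)) for n in (5, 10, 15, 20)}
```

Already for dimension 10 the condition number exceeds \(10^{12}\), so standard double-precision algorithms cannot be expected to deliver correct digits in the smallest eigenvalues or singular values.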
Now, let us notice that Gram matrices of geometric and Poisson bases are STP and then, by Theorem 6.2 of [1], all their eigenvalues are positive and distinct. Furthermore, since Gram matrices are symmetric, their eigenvalues and singular values coincide.
The eigenvalues and singular values of the considered Gram and Wronskian matrices have been computed with the Matlab functions TNEigenValues and TNSingularValues, respectively, taking as argument the matrix representation (8) of the corresponding deduced bidiagonal decomposition (5). Additionally, they have also been obtained by using the Matlab commands eig and svd, respectively. The relative error e of each approximation has been computed as \(e:=|a-\tilde{a} |/|a|\), where a denotes the eigenvalue or singular value computed with Mathematica and \(\tilde{a}\) the eigenvalue or singular value computed with the Matlab functions.
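The error measure above can be mirrored by a small helper (the function name `rel_err` is ours, for illustration), applied componentwise to the vectors of computed eigenvalues or singular values:

```python
import numpy as np

def rel_err(exact, approx):
    """Relative error e = |a - ã| / |a| of an approximation ã to a,
    applied componentwise to arrays of spectral quantities."""
    exact = np.asarray(exact, dtype=float)
    approx = np.asarray(approx, dtype=float)
    return np.abs(exact - approx) / np.abs(exact)
```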
In Tables 3, 4, 5 and 6 the relative errors of the approximations to the lowest eigenvalue and the lowest singular value of the considered matrices are shown. We can observe that our method provides very accurate results, in contrast to the inaccurate results provided by the Matlab commands eig and svd.
On the other hand, two approximations to the inverses of the considered Gram and Wronskian matrices have also been calculated with Matlab. To assess the errors, we have compared both Matlab approximations \(\widetilde{A} ^{-1}\) with the inverse matrix \(A ^{-1}\) computed by Mathematica, using the formula \(e=\Vert A ^{-1}-\widetilde{A} ^{-1} \Vert _2/\Vert A ^{-1}\Vert _2\) for the corresponding relative errors. The obtained relative errors are shown in Tables 7 and 8. Observe that the relative errors achieved through the bidiagonal decompositions obtained in this paper are much smaller than those obtained with the Matlab command inv.
Finally, given random nonnegative integer values \(d_i\), \(i=1,\ldots ,n+1\), we have considered linear systems \( G c = d \) with \( d =((-1)^{i+1}d_i)_{1\le i\le n+1}\), and linear systems \({W c }={ d }\) where, for geometric bases, \({ d }=((-1)^{i+1}d_i)_{1\le i\le n+1}\) and, for Poisson bases, \({ d }=(d_i)_{1\le i\le n+1}\). Two approximations of the solution vector have been computed in Matlab: one using the proposed bidiagonal decomposition (5) with the function TNSolve, and the other using the Matlab command \(\setminus \). Then, the relative error of each computed approximation \(\tilde{c}\) has been obtained as \(e=\Vert c-{\tilde{c}}\Vert _2/ \Vert c\Vert _2\), taking the vector c provided by Mathematica as the exact solution.
In Tables 9 and 10, the relative errors when solving the aforementioned linear systems for different values of n are shown. Notice that the proposed method preserves the accuracy: the relative errors do not increase considerably with the dimension of the system, in contrast with the results obtained with the Matlab command \(\setminus \).
References
Ando, T.: Totally positive matrices. Linear Algebra Appl. 90, 165–219 (1987)
Delgado, J., Peña, J.M.: Accurate computations with collocation matrices of rational bases. Appl. Math. Comput. 219, 4354–4364 (2013)
Delgado, J., Peña, J.M.: Accurate computations with collocation matrices of q-Bernstein polynomials. SIAM J. Matrix Anal. Appl. 36, 880–893 (2015)
Demmel, J., Koev, P.: The accurate and efficient solution of a totally positive generalized Vandermonde linear system. SIAM J. Matrix Anal. Appl. 27, 42–52 (2005)
Fallat, S.M., Johnson, C.R.: Totally Nonnegative Matrices, Princeton Series in Applied Mathematics. Princeton University Press, Princeton (2011)
Gasca, M., Peña, J.M.: Total positivity and Neville elimination. Linear Algebra Appl. 165, 25–44 (1992)
Gasca, M., Peña, J.M.: A matricial description of Neville elimination with applications to total positivity. Linear Algebra Appl. 202, 33–53 (1994)
Gasca, M., Peña, J.M.: On factorizations of totally positive matrices. In: Gasca, M., Micchelli, C.A. (eds.) Total Positivity and its Applications, pp. 109–130. Kluwer Academic Publishers, Dordrecht (1996)
Goldman, R.: The rational Bernstein bases and the multirational blossoms. Comput. Aided Geom. Des. 16, 701–738 (1999)
Koev, P.: Accurate eigenvalues and SVDs of totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 27, 1–23 (2005)
Koev, P.: Accurate computations with totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 29, 731–751 (2007)
Koev, P. Available online: http://www.math.mit.edu/plamen/software/TNTool.html. Accessed 16 May 2021
Mainar, E., Peña, J.M.: Accurate computations with collocation matrices of a general class of bases. Numer. Linear Algebra Appl. 25, e2184 (2018)
Mainar, E., Peña, J.M., Rubio, B.: Accurate bidiagonal decomposition of collocation matrices of weighted \(\varphi \)-transformed systems. Numer. Linear Algebra Appl. 27, e2295 (2020)
Mainar, E., Peña, J.M., Rubio, B.: Accurate computations with Wronskian matrices. Calcolo 58, 1 (2021)
Mainar, E., Peña, J.M., Rubio, B.: Accurate computations with collocation and Wronskian matrices of Jacobi polynomials. J. Sci. Comput. 87, 77 (2021)
Marco, A., Martínez, J.J.: A fast and accurate algorithm for solving Bernstein–Vandermonde linear systems. Linear Algebra Appl. 422, 616–628 (2007)
Marco, A., Martínez, J.J.: Accurate computations with Said–Ball–Vandermonde matrices. Linear Algebra Appl. 432, 2894–2908 (2010)
Marco, A., Martínez, J.J.: Accurate computation of the Moore–Penrose inverse of strictly totally positive matrices. J. Comput. Appl. Math. 350, 299–308 (2019)
Morin, G., Goldman, R.: A subdivision scheme for Poisson curves and surfaces. Comput. Aided Geom. Des. 17, 813–833 (2000)
Pinkus, A.: Totally Positive Matrices, Cambridge Tracts in Mathematics, 181. Cambridge University Press, Cambridge (2010)
Funding
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
This work was partially supported through the Spanish research Grant PGC2018-096321-B-I00 (MCIU/AEI) and by Gobierno de Aragón (E41_20R).
Cite this article
Mainar, E., Peña, J.M. & Rubio, B. Accurate computations with Gram and Wronskian matrices of geometric and Poisson bases. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A-Mat. 116, 126 (2022). https://doi.org/10.1007/s13398-022-01253-1
Keywords
- Accurate computations
- Gram matrices
- Wronskian matrices
- Bidiagonal decompositions
- Geometric bases
- Poisson bases