Total positivity and accurate computations with Gram matrices of Bernstein bases

In this paper, an accurate method to construct the bidiagonal factorization of Gram (mass) matrices of Bernstein bases of positive and negative degree is obtained and used to compute with high relative accuracy their eigenvalues, singular values and inverses. Numerical examples are included.

Bernstein polynomials are also helpful in stochastic dynamics (cf. [24]), in optimal control theory applications (cf. [34]), as well as in the modelling of chemical reactions (cf. [7]). Additionally, they play a crucial role in approximation theory, since they make it possible to prove the Weierstrass approximation theorem (see [4]).
Bernstein bases of negative degree were introduced in [19]. They share many properties with the polynomial Bernstein bases, such as forming a partition of unity, satisfying Descartes' rule of signs, and possessing recurrence relations as well as two-term formulae for degree elevation and differentiation. What is more, negative degree Bernstein bases are also totally positive on their natural domain (see [30]). In contrast with polynomial Bernstein bases, these bases can represent arbitrary functions that are analytic in a neighborhood of zero and can uniformly approximate all continuous functions that vanish at minus infinity.
Despite the nice properties of Bernstein bases, they are not orthogonal. In the polynomial case, to overcome this inconvenience, the Bernstein polynomial basis is often transformed into an orthogonal polynomial basis by means of a transformation matrix, which is a Gram (or mass) matrix. The inversion of Bernstein mass matrices for the resolution of the linear system of normal equations is required when approximating, in the least-squares sense, curves by Bézier curves, that is, by linear combinations of control points and Bernstein polynomials (see [3] and [27][28][29]). On the other hand, many degree reduction methods also require an inversion of these matrices, and they will become more efficient if the matrix inverses are explicitly expressed. Several approaches for the inversion of mass matrices can be found in [1].
Unfortunately, as the degree of the polynomial basis increases, the transformation matrices become ill-conditioned. Consequently, computations with high relative accuracy (HRA) using Bernstein mass matrices are an important issue in numerical linear algebra. The accurate computation with structured classes of matrices has been receiving increasing attention in recent years (cf. [9][10][11][31]). For totally positive matrices, it usually requires the explicit computation of a bidiagonal factorization. In fact, if we achieve the computation of this factorization with HRA, we can apply the algorithms presented in [21][22][23] to solve with HRA algebraic problems such as the computation of the matrix inverse, the computation of the eigenvalues and singular values, or the resolution of some systems of linear equations associated to the matrix.
In this paper, Gram matrices of Bernstein bases of positive and negative degree are considered. It is proved that these matrices are strictly totally positive and, using a Neville elimination procedure (see [16][17][18]), a bidiagonal factorization of them is deduced. By means of the proposed factorization, the resolution of the above-mentioned algebraic problems can be achieved with HRA and, consequently, the accuracy of the obtained solutions does not considerably decrease with the dimension of the matrix, as happens with the traditional methods.
We now describe the layout of the paper. In Section 2, we provide some basic notations and preliminary results. In Section 3, Gram matrices of polynomial Bernstein bases are considered. It is shown that these matrices are STP and a bidiagonal factorization for the resolution with HRA of related algebraic problems is proposed. Furthermore, the bidiagonal factorization of the principal submatrices of Bernstein mass matrices is also provided. Section 4 proves that the Gram matrices of Bernstein bases of negative degree are also STP and the bidiagonal factorization to derive accurate algorithms is deduced. Finally, Section 5 presents numerical experiments confirming the accuracy of the presented methods for the computation of eigenvalues, singular values, inverses or the solution of some linear systems related to Gram matrices of the considered bases.

Notations and auxiliary results
A matrix is totally positive (TP) (respectively, strictly totally positive, STP) if all its minors are nonnegative (respectively, positive). Several applications of these matrices can be found in [2,12,33].
The Neville elimination (NE) is an alternative procedure to Gaussian elimination (see [16][17][18]). Given a nonsingular (n + 1) × (n + 1) matrix A = (a_{i,j})_{1≤i,j≤n+1}, the NE process computes a sequence of matrices

A =: A^{(1)} → A^{(2)} → · · · → A^{(n+1)},

so that, for 1 ≤ k ≤ n, the entries of A^{(k+1)} below the main diagonal in the first k columns are zeros; hence A^{(n+1)} is upper triangular. The (i, j) pivot of the NE process of the matrix A is

p_{i,j} := a^{(j)}_{i,j}, 1 ≤ j ≤ i ≤ n + 1,

and, in particular, we say that p_{i,i} is the i-th diagonal pivot. Let us observe that, whenever all pivots are nonzero, no row exchanges are needed in the NE procedure. The (i, j) multiplier of the NE process of the matrix A is

m_{i,j} := a^{(j)}_{i,j} / a^{(j)}_{i−1,j} = p_{i,j} / p_{i−1,j}, 1 ≤ j < i ≤ n + 1.

NE is a nice tool to deduce that a given matrix is STP, as shown in the characterization of Theorem 1 below, derived from Corollary 5.5 of [16] and the arguments of p. 116 of [8]. By Theorem 4.2 and the arguments of p. 116 of [18], a nonsingular STP matrix A = (a_{i,j})_{1≤i,j≤n+1} admits a factorization of the form (5).
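As an illustration of the procedure just described, the following Python sketch (ours, not part of the paper's software) performs the Neville elimination of a matrix and returns the diagonal pivots p_{i,i} and the multipliers m_{i,j}; it assumes no row exchanges are needed, which is the relevant case for STP matrices. Passing the entries as `Fraction` objects keeps the computation exact.

```python
def neville_elimination(A):
    """Neville elimination of a square matrix A (0-based indices).

    Returns (pivots, multipliers): pivots[i] is the diagonal pivot
    p_{i+1,i+1}, and multipliers[(i, j)] is the multiplier m_{i+1,j+1},
    defined for i > j.  Assumes no row exchanges are needed (e.g. A STP).
    Exact if the entries support exact division (use fractions.Fraction).
    """
    A = [row[:] for row in A]          # work on a copy
    n = len(A)
    mult = {}
    for k in range(n - 1):             # step k zeroes column k below the diagonal
        # rows are updated bottom-up, so A[i - 1] still holds its value
        # from the current iterate A^{(k+1)} when it is used for row i
        for i in range(n - 1, k, -1):
            m = A[i][k] / A[i - 1][k] if A[i - 1][k] else 0
            mult[(i, k)] = m
            A[i] = [A[i][j] - m * A[i - 1][j] for j in range(n)]
    return [A[i][i] for i in range(n)], mult
```

For the 3 × 3 Pascal matrix, a classical STP example, all pivots and multipliers equal 1, so its bidiagonal factorization consists of ones only.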

Theorem 1 A given matrix A is STP if and only if the Neville elimination of A and of A^T can be performed without row exchanges, with positive multipliers and positive diagonal pivots. In that case, A admits a factorization of the form

A = F_n F_{n−1} · · · F_1 D G_1 · · · G_{n−1} G_n, (5)

where F_i and G_i, i = 1, . . . , n, are the TP, lower and upper triangular bidiagonal matrices given by

F_i: unit-diagonal lower bidiagonal, with subdiagonal entries (0, . . . , 0, m_{i+1,1}, m_{i+2,2}, . . . , m_{n+1,n+1−i}); G_i^T: unit-diagonal lower bidiagonal, with subdiagonal entries (0, . . . , 0, m̃_{i+1,1}, m̃_{i+2,2}, . . . , m̃_{n+1,n+1−i}), (6)

and D = diag(p_{1,1}, p_{2,2}, . . . , p_{n+1,n+1}) has positive diagonal entries. The diagonal entries p_{i,i} of D are the positive diagonal pivots of the Neville elimination of A, and the elements m_{i,j} and m̃_{i,j} are the multipliers of the Neville elimination of A and A^T, respectively. If, in addition, the entries m_{i,j}, m̃_{i,j} satisfy

m_{i,j} = 0 ⇒ m_{h,j} = 0 ∀ h > i, and m̃_{i,j} = 0 ⇒ m̃_{i,k} = 0 ∀ k > j, (7)

then the decomposition (5) is unique.
In [21], the bidiagonal factorization (5) of an (n + 1) × (n + 1) nonsingular and TP matrix A is represented by defining a matrix BD(A) = (BD(A)_{i,j})_{1≤i,j≤n+1} such that

BD(A)_{i,j} := m_{i,j} if i > j, BD(A)_{i,j} := m̃_{j,i} if i < j, BD(A)_{i,i} := p_{i,i}. (8)

Remark 1 Using the results in [16][17][18], given the bidiagonal factorization (5) of a TP matrix A, a bidiagonal decomposition of A^{−1} can be computed as

A^{−1} = G̃_1 · · · G̃_n D^{−1} F̃_n · · · F̃_1, (9)

where F̃_i and G̃_i are the lower and upper triangular bidiagonal matrices of the form (6) whose off-diagonal entries are those of F_i and G_i with reversed sign.

Remark 2 Let us also observe that if A is a nonsingular and STP matrix, then A^T is also nonsingular and STP. Moreover, the bidiagonal decomposition of A^T can be computed as

A^T = G_n^T · · · G_1^T D F_1^T · · · F_n^T,

where F_i and G_i, i = 1, . . . , n, are the lower and upper triangular bidiagonal matrices given in the bidiagonal factorization (5). Furthermore, if the nonsingular and STP matrix A is symmetric, then (7) is satisfied and, taking into account the uniqueness of the factorization, we immediately deduce that G_i = F_i^T, i = 1, . . . , n. Consequently, the bidiagonal decomposition (5) satisfies

A = F_n · · · F_1 D F_1^T · · · F_n^T,

where the matrices F_i, i = 1, . . . , n, are the lower triangular bidiagonal matrices described in (6), whose off-diagonal entries coincide with the positive multipliers of the Neville elimination of A, and D is the diagonal matrix with the positive pivots of the Neville elimination of A.
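To make the representation (8) concrete, the following Python sketch (ours) multiplies the factors of (5) back together from the compact matrix BD(A); the placement of the multipliers inside each bidiagonal factor follows the form (6).

```python
def bd_to_matrix(BD):
    """Multiply out A = F_n ... F_1 D G_1 ... G_n from the compact matrix BD,
    whose entry (i, j) stores (0-based) the multiplier m of the NE of A for
    i > j, the multiplier of the NE of A^T for i < j, and the pivot for i == j.
    """
    n = len(BD) - 1
    size = n + 1

    def identity():
        return [[1 if i == j else 0 for j in range(size)] for i in range(size)]

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(size))
                 for j in range(size)] for i in range(size)]

    A = [[BD[i][i] if i == j else 0 for j in range(size)] for i in range(size)]
    for s in range(1, n + 1):
        F, G = identity(), identity()
        for k in range(s, size):
            F[k][k - 1] = BD[k][k - s]   # multiplier of the NE of A
            G[k - 1][k] = BD[k - s][k]   # multiplier of the NE of A^T
        A = matmul(F, matmul(A, G))      # builds F_s (...) G_s outwards
    return A
```

With the all-ones compact matrix, the product is the 3 × 3 Pascal matrix, whose bidiagonal decomposition is classically known to consist entirely of ones.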
Finally, let us recall that a real value x is computed with high relative accuracy (HRA) whenever the computed value x̂ satisfies

|x − x̂| < K u |x|,

where u is the unit round-off and K > 0 is a constant independent of the arithmetic precision. Clearly, HRA implies great accuracy, since the relative errors in the computations have the same order as the machine precision. A sufficient condition to assure that an algorithm can be computed with HRA is the no inaccurate cancellation condition, sometimes denoted the NIC condition, which is satisfied if the algorithm only evaluates products, quotients, sums of numbers of the same sign, subtractions of numbers of opposite sign, or subtractions of initial data (cf. [11,21]). If the bidiagonal factorization (5) of a nonsingular and TP matrix A can be computed with HRA, then the computation of its eigenvalues and singular values, the computation of A^{−1} and even the resolution of systems of linear equations Ax = b, for vectors b with alternating signs, can also be performed with HRA using the algorithms provided in [22].
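The need for the NIC condition is easy to illustrate. The toy Python computation below (our example, unrelated to the paper's algorithms) evaluates (1 − cos x)/x² for small x, first with an inaccurate cancellation and then with an algebraically equivalent rearrangement that only uses products and quotients; the first loses all significant digits, while the second stays at roundoff level (the true value tends to 1/2 as x → 0).

```python
from math import cos, sin

def naive(x):
    # subtracts two nearly equal computed quantities: inaccurate cancellation
    return (1.0 - cos(x)) / (x * x)

def nic(x):
    # equivalent form 2*(sin(x/2)/x)^2: only products and quotients (NIC)
    s = sin(x / 2.0) / x
    return 2.0 * s * s
```

For x = 10⁻⁸, cos(x) rounds to 1 in double precision, so the naive formula returns 0 instead of a value near 0.5, a total loss of relative accuracy that no constant K can absorb.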

Total positivity and accurate computations with Gram matrices of polynomial Bernstein bases
The Bernstein basis of the space P_n[0, 1] of polynomials of degree less than or equal to n on the interval [0, 1] is (B_0^n, . . . , B_n^n), with

B_i^n(x) := \binom{n}{i} x^i (1 − x)^{n−i}, i = 0, . . . , n.

These basis functions belong to the vector space L^2[0, 1] of square integrable functions, which is a Hilbert space under the following inner product

⟨f, g⟩ := ∫_0^1 f(x) g(x) dx. (12)

The corresponding Gram matrix is the symmetric matrix

M := (⟨B_{i−1}^n, B_{j−1}^n⟩)_{1≤i,j≤n+1}, ⟨B_i^n, B_j^n⟩ = \binom{n}{i} \binom{n}{j} Γ(i+j+1) Γ(2n−i−j+1) / Γ(2n+2), (13)

where Γ(x) denotes the well-known Gamma function. Let us recall that the dual Bernstein basis of P_n[0, 1] is the polynomial basis (D_0^n, . . . , D_n^n) satisfying ⟨D_i^n, B_j^n⟩ = δ_{i,j} for i, j = 0, . . . , n (cf. [25,26,35,36]), and then M is the matrix of change of basis between the Bernstein basis and its dual basis (see [29]).

For integers r, l ≥ 0 with r + l ≤ n, let m := n − r − l and let M_{r,l} denote the (m + 1) × (m + 1) principal submatrix of M corresponding to the Bernstein basis functions (B_r^n, . . . , B_{n−l}^n). Taking into account (13), M_{r,l} can be described as

M_{r,l} := (⟨B_{r+i−1}^n, B_{r+j−1}^n⟩)_{1≤i,j≤m+1}. (15)

The following result proves that the Gram matrices M_{r,l} with respect to the inner product (12) are STP and provides the multipliers and the diagonal pivots of their Neville elimination. As we shall show, the corresponding bidiagonal factorization (5) will provide accurate computations related to these matrices. We are going to use the following generalization of combinatorial numbers: given α ∈ R and n ∈ N,

\binom{α}{n} := α(α−1) · · · (α−n+1) / n!, \binom{α}{0} := 1. (16)

Theorem 2 For n ∈ N and integers r, l ≥ 0 with r + l ≤ n, let m := n − r − l. Then, the (m + 1) × (m + 1) Gram matrix M_{r,l} described by (15) is STP. Moreover, M_{r,l} admits a factorization of the form (5), where F_i and G_i, i = 1, . . . , m, are the lower and upper triangular bidiagonal matrices given by (6) and D = diag(p_{1,1}, . . . , p_{m+1,m+1}). The off-diagonal entries m_{i,j} and m̃_{i,j}, 1 ≤ j < i ≤ m + 1, are given by (17) and (18), and the diagonal entries p_{i,i}, 1 ≤ i ≤ m + 1, satisfy (19).

Proof Let A^{(k)} = (a^{(k)}_{i,j})_{1≤i,j≤m+1}, k = 1, . . . , m + 1, be the matrices obtained after k − 1 steps of the NE process of M_{r,l}. First, let us see by induction on k that (20) holds for 1 ≤ i, j ≤ m + 1.
By (15), formula (20) holds for k = 1. Now, taking into account the recurrence Γ(x + 1) = x Γ(x) of the Gamma function and using the generalization (16) of combinatorial numbers, we obtain the identities (21) and (22), which allow us to express the NE update

a^{(k+1)}_{i,j} = a^{(k)}_{i,j} − (a^{(k)}_{i,k} / a^{(k)}_{i−1,k}) a^{(k)}_{i−1,j}

in closed form. Finally, from (23), (24) and the previous equalities, formula (20) also holds for k + 1.
By (3) and (20), we can easily deduce that the pivots p_{i,j} of the NE of M_{r,l} satisfy (25) for 1 ≤ j ≤ i ≤ m + 1. Then, for the particular case i = j, we obtain (26), and it can be easily checked that (19) holds. By formula (26), we see that the pivots of the NE of M_{r,l} are positive. Finally, using (4) and (26), the multipliers m_{i,j}, 1 ≤ j < i ≤ m + 1, can be written as in (17).
Taking into account that M_{r,l} is a symmetric matrix, we conclude that m̃_{i,j} = m_{i,j}, 1 ≤ j < i ≤ m + 1. Clearly, the pivots and multipliers of the NE of M_{r,l} are positive and so, M_{r,l} is STP.
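The entries of the Gram matrix (13) can be generated exactly in rational arithmetic, which is convenient for testing implementations of the factorization above. The Python sketch below (ours, not the paper's software) uses the Beta integral ∫_0^1 x^a (1−x)^b dx = a! b!/(a+b+1)!; each entry also equals C(n,i) C(n,j) / ((2n+1) C(2n,i+j)), and every row must sum to 1/(n+1) because the Bernstein basis is a partition of unity.

```python
from fractions import Fraction
from math import comb, factorial

def bernstein_gram(n):
    """Exact Gram matrix of (B_0^n, ..., B_n^n) for the inner product (12).

    <B_i^n, B_j^n> = C(n,i) C(n,j) * Integral_0^1 x^(i+j) (1-x)^(2n-i-j) dx
                   = C(n,i) C(n,j) (i+j)! (2n-i-j)! / (2n+1)!.
    """
    return [[Fraction(comb(n, i) * comb(n, j) * factorial(i + j)
                      * factorial(2 * n - i - j),
                      factorial(2 * n + 1))
             for j in range(n + 1)]
            for i in range(n + 1)]
```

Working over the rationals, symmetry, the row sums and the equivalent binomial closed form can all be verified exactly, with no rounding involved.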
Let us notice that, from Theorem 2, the bidiagonal factorization (5) of the Gram matrix M_{r,l} described by (15) can be represented by means of the (m + 1) × (m + 1) matrix BD(M_{r,l}) = (BD(M_{r,l})_{i,j})_{1≤i,j≤m+1} whose entries are the multipliers and diagonal pivots given in Theorem 2 (cf. (8)). The following result provides conditions guaranteeing that M_{r,l} and (M_{r,l})^{−1} can be computed with HRA.

Total positivity and accurate computations with Gram matrices of Bernstein bases of negative degree
Given m ∈ N, the Bernstein basis (B_0^{−m}, . . . , B_n^{−m}) of negative degree −m is formed by the functions

B_i^{−m}(x) := \binom{−m}{i} x^i (1 − x)^{−m−i}, i = 0, . . . , n, (31)

where, by the generalization (16) of combinatorial numbers,

\binom{−m}{i} = (−1)^i \binom{m+i−1}{i}. (32)

From formulae (31) and (32), we can easily deduce that the Bernstein basis functions of negative degree are nonnegative over the interval (−∞, 0]. Furthermore, they are linearly independent rational functions and form a partition of unity (cf. [19]). These functions belong to the vector space L^2(−∞, 0) of square integrable functions, which is a Hilbert space under the inner product

⟨f, g⟩ := ∫_{−∞}^0 f(x) g(x) dx. (33)

It can be checked that the corresponding Gram matrix, M = (M_{i,j})_{1≤i,j≤n+1} with M_{i,j} := ⟨B_{i−1}^{−m}, B_{j−1}^{−m}⟩, can be computed exactly as

M_{i,j} = \binom{m+i−2}{i−1} \binom{m+j−2}{j−1} Γ(i+j−1) Γ(2m−1) / Γ(2m+i+j−2), 1 ≤ i, j ≤ n + 1. (34)

The following result proves that Gram matrices of Bernstein bases of negative degree are STP and provides the multipliers and the diagonal pivots of their NE. The corresponding bidiagonal factorization (5) will provide accurate computations related to these matrices.
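Assuming the basis functions have the form given in (31)–(32), the substitution t = −x turns each entry of the Gram matrix (34) into a Beta integral, so the matrix can be generated exactly in rational arithmetic. The Python sketch below is our own cross-check, not the paper's software; note that its (1, 1) entry reproduces the value p_{1,1} = 1/(2m − 1) stated in Theorem 3.

```python
from fractions import Fraction
from math import comb, factorial

def negative_bernstein_gram(n, m):
    """Exact Gram matrix of (B_0^{-m}, ..., B_n^{-m}) for the inner product (33),
    assuming B_i^{-m}(x) = C(m+i-1, i) (-x)^i (1-x)^(-m-i) on (-inf, 0].

    With t = -x:  <B_i^{-m}, B_j^{-m}>
        = C(m+i-1,i) C(m+j-1,j) * Integral_0^inf t^(i+j) (1+t)^(-2m-i-j) dt
        = C(m+i-1,i) C(m+j-1,j) (i+j)! (2m-2)! / (2m+i+j-1)!,
    which is finite for every m >= 1.
    """
    return [[Fraction(comb(m + i - 1, i) * comb(m + j - 1, j)
                      * factorial(i + j) * factorial(2 * m - 2),
                      factorial(2 * m + i + j - 1))
             for j in range(n + 1)]
            for i in range(n + 1)]
```

Besides the (1, 1) entry, symmetry and positivity of all entries, consistent with the strict total positivity proved below, can be verified exactly.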

Theorem 3
The (n + 1) × (n + 1) Gram matrix M given by (34) is STP. Moreover, M admits a factorization of the form (5), where F_i and G_i, i = 1, . . . , n, are the (n + 1) × (n + 1) lower and upper triangular bidiagonal matrices of the form (6) and D = diag(p_{1,1}, . . . , p_{n+1,n+1}). The entries m_{i,j}, m̃_{i,j} and p_{i,i} are given by (35)–(37).

Proof By (34), formula (38) holds for k = 1. Let us now suppose that (38) holds for some k ∈ {1, . . . , n}. Then, taking into account the equalities (39) and (40), it can be easily checked that the NE update a^{(k+1)}_{i,j} = a^{(k)}_{i,j} − (a^{(k)}_{i,k}/a^{(k)}_{i−1,k}) a^{(k)}_{i−1,j} can be written in closed form, and formula (38) also holds for k + 1. Now, by (3) and (38), we can easily deduce that the pivots p_{i,j} of the NE of M satisfy (41) for 1 ≤ j ≤ i ≤ n + 1. Then, in particular, for i = j we have (42). Then, it can be easily checked that p_{1,1} = 1/(2m − 1) and so, (37) readily follows. Now, let us observe that, by formula (42), the pivots of the NE of M are positive and so, this elimination can be performed without row exchanges. Finally, using (4) and (42), the multipliers m_{i,j} can be written as in (35). Taking into account that M is a symmetric matrix, we conclude that m̃_{i,j} = m_{i,j}, 1 ≤ j < i ≤ n + 1. Clearly, the pivots and multipliers of the NE of M are positive and so, M is STP.

Now, from Theorem 3, the bidiagonal factorization (5) of the (n + 1) × (n + 1) Gram matrix M of the Bernstein basis of degree −m can be represented by means of the (n + 1) × (n + 1) matrix BD(M) = (BD(M)_{i,j})_{1≤i,j≤n+1} defined in (45). Let us observe that this bidiagonal factorization can be computed with HRA. Consequently, using (9), its inverse matrix can also be computed with HRA, as stated in the following result.

Corollary 2
Let M be the (n + 1) × (n + 1) Gram matrix of the Bernstein basis of degree −m described by (34). Then M and M −1 can be computed with HRA. Section 5 shows accurate results obtained when solving algebraic problems using the proposed bidiagonal factorization and the algorithms presented in [22,23].

Numerical experiments
Let us suppose that A is an (n+1)×(n+1) nonsingular TP matrix whose bidiagonal decomposition (5) is represented by means of the matrix BD(A) given in (8). If BD(A) can be computed with HRA, then the Matlab functions TNEigenValues, TNSingularValues, TNInverseExpand and TNSolve of the library TNTools in [23] take BD(A) as input argument and compute with HRA the eigenvalues of A, the singular values of A, its inverse matrix A^{−1} (using the algorithm presented in [32]) and the solution of systems of linear equations Ax = b, for vectors b whose entries have alternating signs. The computational cost of the function TNSolve is O(n^2) elementary operations. On the other hand, as can be checked on page 303 of reference [32], the function TNInverseExpand has a computational cost of O(n^2) and thus improves on computing the inverse matrix by solving linear systems with TNSolve, taking the columns of the identity matrix as data (O(n^3)). The computational cost of the other mentioned functions is O(n^3).
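The O(n²) cost of a solver of this kind stems from the fact that each bidiagonal factor in (5) can be applied in inverse in O(n) operations. The Python sketch below (ours, mimicking the idea rather than the library routine TNSolve) solves Ax = b from the compact representation BD(A) of (8) by successive forward and backward substitutions through the factors of A = F_n · · · F_1 D G_1 · · · G_n.

```python
def tn_solve(BD, b):
    """Solve A x = b given the compact bidiagonal representation BD of
    A = F_n ... F_1 D G_1 ... G_n (cf. (5) and (8)), in O(n^2) operations.

    A sketch of the idea behind TNSolve, not the library routine itself.
    """
    n = len(BD) - 1
    x = list(b)
    for s in range(n, 0, -1):              # forward substitution: F_n, ..., F_1
        for k in range(s, n + 1):          # (F_s)[k][k-1] = BD[k][k-s]
            x[k] -= BD[k][k - s] * x[k - 1]
    for i in range(n + 1):                 # diagonal factor D
        x[i] /= BD[i][i]
    for s in range(1, n + 1):              # back substitution: G_1, ..., G_n
        for j in range(n - 1, s - 2, -1):  # (G_s)[j][j+1] = BD[j+1-s][j+1]
            x[j] -= BD[j + 1 - s][j + 1] * x[j + 1]
    return x
```

For the 3 × 3 Pascal matrix, whose compact representation consists entirely of ones, the sketch recovers the solution of [[1,1,1],[1,2,3],[1,3,6]] x = (3, 6, 10)^T, namely x = (1, 1, 1)^T.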
For the polynomial Bernstein basis (B_0^n, . . . , B_n^n), using Theorem 2, we have implemented a Matlab function computing the matrix BD(M), where M is the Gram matrix described in (29) (see (30)) or its principal submatrix M_{r,l} (see (28)).
For the Bernstein basis of degree −m, (B_0^{−m}, . . . , B_n^{−m}), considering Theorem 3, we have also implemented a Matlab function, which computes the matrix BD(M) in (45) for the Gram matrix M given in (34).
Observe that, in all cases, the computational complexity of the computation of the entries m_{i,j}, m̃_{i,j}, 1 ≤ j < i ≤ n + 1, is O(n^2), and that of the computation of the diagonal entries p_{i,i}, 1 ≤ i ≤ n + 1, is O(n).

In the numerical experimentation, we have considered different (n + 1) × (n + 1) Gram matrices corresponding to Bernstein bases and Bernstein bases of negative degree, as well as different (m+1)×(m+1) submatrices M_{r,l} of the Bernstein mass matrix, where m = n−r−l. The numerical results illustrate the accuracy of the computations for the considered dimensions n + 1 = 10, 15, 20, 25. The authors will provide upon request the software with the implementation of the above-mentioned routines.
The 2-norm condition number of the considered Gram matrices has been obtained by means of the Mathematica command Norm[A,2]·Norm[Inverse[A],2] and is shown in Table 1. We can clearly observe that the condition numbers significantly increase with the dimension of the matrices. This explains why traditional methods do not obtain accurate solutions when solving the aforementioned algebraic problems. In contrast, the numerical results will illustrate the high accuracy obtained when using the bidiagonal decompositions deduced in this paper with the Matlab functions available in [23].

In our first numerical example we have computed the eigenvalues and singular values of the considered Gram matrices. Let us notice that Gram matrices of Bernstein bases of positive and negative degree are STP and then, by Theorem 6.2 of [2], all their eigenvalues are positive and distinct. Furthermore, since Gram matrices are symmetric, their eigenvalues and singular values coincide.
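The growth of the condition numbers reported in Table 1 is easy to reproduce. The stdlib-only Python sketch below (ours) generates the Gram matrix of the polynomial Bernstein basis exactly, inverts it in rational arithmetic and evaluates the 1-norm condition number; the paper uses the 2-norm via Mathematica, but for symmetric matrices the two differ by at most a factor of the dimension, so the explosive growth is the same.

```python
from fractions import Fraction
from math import comb

def gram(n):
    # exact entries <B_i^n, B_j^n> = C(n,i) C(n,j) / ((2n+1) C(2n,i+j))
    return [[Fraction(comb(n, i) * comb(n, j),
                      (2 * n + 1) * comb(2 * n, i + j))
             for j in range(n + 1)] for i in range(n + 1)]

def inverse(A):
    """Exact Gauss-Jordan inversion over the rationals."""
    k = len(A)
    M = [row[:] + [Fraction(int(i == j)) for j in range(k)]
         for i, row in enumerate(A)]
    for c in range(k):
        p = next(r for r in range(c, k) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(k):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [vr - f * vc for vr, vc in zip(M[r], M[c])]
    return [row[k:] for row in M]

def cond1(A):
    """1-norm condition number (exact, as a Fraction)."""
    norm1 = lambda B: max(sum(abs(B[i][j]) for i in range(len(B)))
                          for j in range(len(B)))
    return norm1(A) * norm1(inverse(A))
```

Since everything is computed over the rationals, the condition numbers obtained this way are exact, and the ill-conditioning cannot be blamed on the inversion step itself.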
We have computed the eigenvalues and singular values of the considered Gram matrices with the following algorithms: the Matlab functions TNEigenValues and TNSingularValues, taking as input argument the bidiagonal decompositions proposed in this paper; the Matlab commands eig and svd; and Mathematica with extended precision arithmetic. The values provided by Mathematica have been considered as the exact solution of the algebraic problem, and the relative error e of each approximation has been computed as e := |a − ã|/|a|, where a denotes the eigenvalue or singular value computed with Mathematica and ã the eigenvalue or singular value computed with Matlab.
In Tables 2 and 3, the relative errors of the approximations to the lowest eigenvalue and the lowest singular value of the considered matrices are shown. We can observe that our methods provide very accurate results, in contrast to the inaccurate results provided by the Matlab commands eig and svd.
On the other hand, in our second experiment, we have computed the inverse of the considered Gram matrices with the following algorithms: the Matlab function TNInverseExpand, taking as input argument the bidiagonal decompositions proposed in this paper, and the Matlab command inv. To look over the errors, we have compared both Matlab approximations Ã^{−1} with the inverse matrix A^{−1} computed by Mathematica using 100-digit arithmetic, taking into account the formula e = ‖A^{−1} − Ã^{−1}‖_2 / ‖A^{−1}‖_2 for the corresponding relative error. The obtained relative errors are shown in Table 4.

Finally, in our third experiment, we have solved linear systems associated with the considered Gram matrices, with data vectors whose entries have alternating signs, using the Matlab function TNSolve with the proposed bidiagonal decompositions and the Matlab command \. The vector provided by Mathematica has been considered as the exact solution c. Then, we have computed in Mathematica the relative error of the approximation c̃ obtained with Matlab, taking into account the formula e = ‖c − c̃‖_2 / ‖c‖_2.
In Table 5, the relative errors when solving the aforementioned linear systems for different values of n are shown. Notice that the proposed methods preserve the accuracy: the relative errors do not considerably increase with the dimension of the system, in contrast with the results obtained with the Matlab command \.