1 Introduction

Bernstein polynomials have a wide range of helpful properties. As a result, they are of enormous practical utility in computer-aided geometric design (CAGD) as well as in numerous other fields of mathematics (see [5, 6, 13, 15] and the references therein). The Bernstein polynomial basis provides the definition of Bézier curves and surfaces, which can be used to approximate any curve or surface to a high degree of accuracy. The Bernstein basis is normalized and totally positive on its natural domain. In fact, it has optimal shape-preserving [8] and stability [14] properties.

Moreover, Bernstein bases have several essential applications beyond CAGD. By way of example, they have been used in Galerkin and collocation methods for solving elliptic and hyperbolic partial differential equations (cf. [5, 6]). They are also useful in stochastic dynamics (cf. [24]), in optimal control theory (cf. [34]), and in the modelling of chemical reactions (cf. [7]). Additionally, Bernstein polynomials play a crucial role in approximation theory, since they make it possible to prove the Weierstrass approximation theorem (see [4]).

Bernstein bases of negative degree were introduced in [19]. They share many properties with the polynomial Bernstein bases: they form a partition of unity, satisfy Descartes' law of signs, and possess recurrence relations as well as two-term formulae for degree elevation and differentiation. What is more, negative degree Bernstein bases are also totally positive on their natural domain (see [30]). In contrast to polynomial Bernstein bases, these bases can represent arbitrary functions that are analytic in a neighborhood of zero and can uniformly approximate all continuous functions that vanish at minus infinity.

Despite their nice properties, Bernstein bases are not orthogonal. In the polynomial case, to overcome this inconvenience, the Bernstein polynomial basis is often transformed into an orthogonal polynomial basis by means of a transformation matrix, which is a Gram (or mass) matrix. The inversion of Bernstein mass matrices for the resolution of the linear system of normal equations is required when approximating, in the least-squares sense, curves by Bézier curves, that is, by linear combinations of control points and Bernstein polynomials (see [3] and [27,28,29]). On the other hand, many degree reduction methods also require the inversion of these matrices, and they become more efficient if the matrix inverses are explicitly expressed. Several approaches for the inversion of mass matrices can be found in [1].

Unfortunately, as the degree of the polynomial basis increases, the transformation matrix becomes ill-conditioned. Consequently, performing computations with high relative accuracy (HRA) using Bernstein mass matrices is an important issue in numerical linear algebra. Accurate computation with structured classes of matrices has received increasing attention in recent years (cf. [9,10,11, 31]). For totally positive matrices, it usually requires the explicit computation of a bidiagonal factorization. In fact, if this factorization can be computed with HRA, the algorithms presented in [21,22,23] can be applied to solve with HRA algebraic problems such as the computation of the matrix inverse, the computation of the eigenvalues and singular values, or the resolution of some systems of linear equations associated with the matrix.

In this paper, Gram matrices of Bernstein bases of positive and negative degree are considered. It is proved that these matrices are strictly totally positive and, using a Neville elimination procedure (see [16,17,18]), a bidiagonal factorization of them is deduced. By means of the proposed factorization, the resolution of the above-mentioned algebraic problems can be achieved with HRA and, consequently, the accuracy of the obtained solutions does not considerably decrease with the dimension of the matrix, as happens with traditional methods.

We now describe the layout of the paper. In Section 2, we provide some basic notations and preliminary results. In Section 3, Gram matrices of polynomial Bernstein bases are considered. It is shown that these matrices are STP and a bidiagonal factorization for the resolution with HRA of related algebraic problems is proposed. Furthermore, the bidiagonal factorization of the principal submatrices of Bernstein mass matrices is also provided. Section 4 proves that the Gram matrices of Bernstein bases of negative degree are also STP and the bidiagonal factorization to derive accurate algorithms is deduced. Finally, Section 5 presents numerical experiments confirming the accuracy of the presented methods for the computation of eigenvalues, singular values, inverses or the solution of some linear systems related to Gram matrices of the considered bases.

2 Notations and auxiliary results

A matrix is totally positive (TP) (respectively, strictly totally positive (STP)) if all its minors are nonnegative (respectively, positive). Several applications of these matrices can be found in [2, 12, 33].

The Neville elimination (NE) is an alternative procedure to Gaussian elimination (see [16,17,18]). Given a nonsingular (n + 1) × (n + 1) matrix \(A = (a_{i,j})_{1\le i,j\le n+1}\), the NE process calculates a sequence of matrices

$$ A^{(1)} :=A\to A^{(2)} \to {\cdots} \to A^{(n+1)}, $$
(1)

so that the entries of \(A^{(k+1)}\) below the main diagonal in the first k columns, 1 ≤ k ≤ n, are zeros and so, \(A^{(n+1)}\) is upper triangular. The matrix \(A^{(k+1)} =(a_{i,j}^{(k+1)})_{1\le i,j\le n+1}\) is computed from \(A^{(k)}=(a_{i,j}^{(k)})_{1\le i,j\le n+1}\) by means of the following relations

$$ a_{i,j}^{(k+1)}:=\begin{cases} a^{(k)}_{i,j}, \quad &\text{if} \ 1\le i \le k, \\ a^{(k)}_{i,j}- \frac{a_{i,k}^{(k)}}{ a_{i-1,k}^{(k)} } a^{(k)}_{i-1,j} , \quad &\textrm{if } k+1\le i,j\le n+1 \textrm{ and } a_{i-1,k}^{(k)}\ne 0,\\ a^{(k)}_{i,j}, \quad &\text{if} \quad k+1\le i\le n+1 \textrm{ and } a_{i-1,k}^{(k)}= 0. \end{cases} $$
(2)

The (i,j) pivot of the NE process of the matrix A is

$$ p_{i,j} := a_{i,j}^{(j)}, \quad 1\le j\le i\le n+1, $$
(3)

and, in particular, we say that \(p_{i,i}\) is the i-th diagonal pivot. Let us observe that whenever all the pivots are nonzero, no row exchanges are needed in the NE procedure.

The (i,j) multiplier of the NE process of the matrix A is

$$ m_{i,j}:=\begin{cases} a_{i,j}^{(j)} / a_{i-1,j}^{(j)}={ p_{i,j} }/{ p_{i-1,j}}, & \textrm{if } a_{i-1,j}^{(j)} \ne 0,\\ \strut 0, & \textrm{if } a_{i-1,j}^{(j)} = 0, \end{cases} ,\quad 1\le j < i\le n+1. $$
(4)
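
For illustration, the relations (2)–(4) can be coded in a few lines. The following Python sketch (exact rational arithmetic via the standard `fractions` module; the routine is purely illustrative and unrelated to the Matlab functions of [23]) performs the NE of a square matrix and returns its multipliers and diagonal pivots:

```python
from fractions import Fraction

def neville_elimination(A):
    """Neville elimination (2) of a square matrix without row exchanges.
    Returns the multipliers (4) in a lower triangular table m (m[i][j] is
    m_{i+1,j+1} in the 1-based notation) and the diagonal pivots (3)."""
    A = [[Fraction(x) for x in row] for row in A]
    n = len(A)
    m = [[Fraction(0)] * n for _ in range(n)]
    for k in range(n - 1):              # k-th step: zero column k below the diagonal
        for i in range(n - 1, k, -1):   # bottom-up, so row i-1 still belongs to step k
            if A[i - 1][k] != 0:
                m[i][k] = A[i][k] / A[i - 1][k]
                for j in range(k, n):
                    A[i][j] -= m[i][k] * A[i - 1][j]
    return m, [A[i][i] for i in range(n)]
```

For the 3 × 3 symmetric Pascal matrix [[1,1,1],[1,2,3],[1,3,6]], for instance, all multipliers and diagonal pivots turn out to be 1.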

NE is a useful tool for deducing that a given matrix is STP, as shown in the following characterization, derived from Corollary 5.5 of [16] and the arguments of p. 116 of [8].

Theorem 1

A given matrix A is STP if and only if the Neville elimination of A and \(A^T\) can be performed without row exchanges, all the multipliers of the Neville elimination of A and \(A^T\) are positive, and the diagonal pivots of the Neville elimination of A are all positive.

By Theorem 4.2 and the arguments of p. 116 of [18], a nonsingular STP matrix \(A = (a_{i,j})_{1\le i,j\le n+1}\) admits a factorization of the form

$$ A=F_{n}F_{n-1}{\cdots} F_{1} D G_{1} {\cdots} G_{n-1} G_{n}, $$
(5)

where \(F_i\) and \(G_i\), i = 1,…,n, are the TP lower and upper triangular bidiagonal matrices given by

$$ \begin{array}{@{}rcl@{}} F_{i}&=&\left( \begin{array}{cccccccccc} 1 \\ 0 & 1 \\ & {\ddots} & {\ddots} \\ & & & 0 & 1 \\ & & & & m_{i+1,1} & 1 \\ & & & & & m_{i+2,2} & 1 \\ & & & & & & {\ddots} & {\ddots} \\ & & & & & & & m_{n+1,n+1-i} & 1 \end{array} \right), \\ {G_{i}^{T}}&=&\left( \begin{array}{cccccccccc} 1 \\ 0 & 1 \\ & {\ddots} & {\ddots} \\ & & & 0 & 1 \\ & & & & \widetilde{m}_{i+1,1} & 1 \\ & & & & & \widetilde{m}_{i+2,2} & 1 \\ & & & & & & {\ddots} & {\ddots} \\ & & & & & & & \widetilde{m}_{n+1,n+1-i} & 1 \end{array} \right), \end{array} $$
(6)

and \(D=\text {diag}\left (p_{1,1}, p_{2,2},\ldots , p_{n+1,n+1}\right )\) has positive diagonal entries.

The diagonal entries \(p_{i,i}\) of D are the positive diagonal pivots of the Neville elimination of A and the elements \(m_{i,j}\) and \(\widetilde {m}_{i,j}\) are the multipliers of the Neville elimination of A and \(A^T\), respectively. If, in addition, the entries \(m_{i,j}\), \(\widetilde {m}_{i,j}\) satisfy

$$ m_{ij} = 0 \quad \Rightarrow \quad m_{hj} = 0, \forall h>i \quad \text{and}\quad \widetilde{m}_{ij} = 0 \quad \Rightarrow \quad \widetilde{m}_{ik}=0, \forall k\!>\!j, $$
(7)

then the decomposition (5) is unique.

In [21], the bidiagonal factorization (5) of an (n + 1) × (n + 1) nonsingular and TP matrix A is represented by defining a matrix \(BD(A) = (BD(A)_{i,j})_{1\le i,j\le n+1}\) such that

$$ BD(A)_{i,j}:=\begin{cases} m_{i,j}, & \textrm{if } i>j, \\ p_{i,i}, & \textrm{if } i=j, \\ \widetilde{m}_{j,i}, & \textrm{if } i<j. \end{cases} $$
(8)
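
Conversely, the full factorization (5) can be reassembled from this compact representation. The following Python sketch (our own illustrative helper, in exact rational arithmetic) builds the factors \(F_i\), D and \(G_i\) of (6) from the entries of the compact matrix and multiplies them out:

```python
from fractions import Fraction

def bd_to_matrix(B):
    """Reassemble A = F_n ... F_1 D G_1 ... G_n from the compact matrix
    of (8): B[i][j] stores (0-indexed) the multiplier m_{i+1,j+1} below the
    diagonal, the pivot p_{i+1,i+1} on it and the transposed multipliers
    of A^T above it."""
    n = len(B)

    def eye():
        return [[Fraction(int(r == c)) for c in range(n)] for r in range(n)]

    def matmul(X, Y):
        return [[sum(X[r][k] * Y[k][c] for k in range(n)) for c in range(n)]
                for r in range(n)]

    def F(i):                    # lower bidiagonal factor F_i of (6)
        M = eye()
        for r in range(i, n):
            M[r][r - 1] = Fraction(B[r][r - i])
        return M

    def G(i):                    # upper bidiagonal factor G_i of (6)
        M = eye()
        for r in range(i, n):
            M[r - 1][r] = Fraction(B[r - i][r])
        return M

    A = eye()
    for i in range(n - 1, 0, -1):       # F_n ... F_1
        A = matmul(A, F(i))
    D = eye()
    for r in range(n):
        D[r][r] = Fraction(B[r][r])
    A = matmul(A, D)
    for i in range(1, n):               # G_1 ... G_n
        A = matmul(A, G(i))
    return A
```

With B the 3 × 3 all-ones matrix, the routine returns the symmetric Pascal matrix [[1,1,1],[1,2,3],[1,3,6]], a classical STP example.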

Remark 1

Using the results in [16,17,18], given the bidiagonal factorization (5) of a TP matrix A, a bidiagonal decomposition of \(A^{-1}\) can be computed as

$$ A^{-1}= \widetilde{G}_{1}\cdots\widetilde{G}_{n-1} \widetilde{G}_{n} D^{-1}\widetilde{F}_{n} \widetilde{F}_{n-1}\cdots\widetilde{F}_{1}, $$
(9)

where \(\widetilde {F}_{i}\) and \(\widetilde {G}_{i}\) are the lower and upper bidiagonal matrices of the form (6), obtained by replacing the off-diagonal entries \(\{m_{i+1,1},\ldots,m_{n+1,n+1-i}\}\) and \(\{\tilde {m}_{i+1,1},\ldots , \tilde {m}_{n+1,n+1-i}\}\) with \(\{-m_{i+1,i},\ldots,-m_{n+1,i}\}\) and \(\{-\tilde {m}_{i+1,i},\ldots,-\tilde {m}_{n+1,i}\}\), respectively.
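
This recipe can be checked in exact arithmetic. The sketch below (our own illustrative code, not the implementation of [16,17,18]) assembles \(A^{-1}\) from the compact representation (8) by flipping the signs of the multipliers and reversing the order of the bidiagonal factors, exactly as indicated in (9):

```python
from fractions import Fraction

def bd_inverse(B):
    """Sketch of (9): build A^{-1} from the compact matrix of (8).
    Note that the i-th flipped factor uses the multipliers of COLUMN i,
    -m_{i+1,i}, ..., -m_{n+1,i}, placed on consecutive subdiagonal slots."""
    n = len(B)

    def eye():
        return [[Fraction(int(r == c)) for c in range(n)] for r in range(n)]

    def matmul(X, Y):
        return [[sum(X[r][k] * Y[k][c] for k in range(n)) for c in range(n)]
                for r in range(n)]

    def f_tilde(i):              # \tilde F_i: subdiagonal entries -m_{r+1,i}
        F = eye()
        for r in range(i, n):
            F[r][r - 1] = -Fraction(B[r][i - 1])
        return F

    def g_tilde(i):              # \tilde G_i: superdiagonal entries -\tilde m_{r+1,i}
        G = eye()
        for r in range(i, n):
            G[r - 1][r] = -Fraction(B[i - 1][r])
        return G

    Ainv = eye()
    for i in range(1, n):                 # \tilde G_1 ... \tilde G_n
        Ainv = matmul(Ainv, g_tilde(i))
    Dinv = eye()
    for r in range(n):
        Dinv[r][r] = 1 / Fraction(B[r][r])
    Ainv = matmul(Ainv, Dinv)
    for i in range(n - 1, 0, -1):         # \tilde F_n ... \tilde F_1
        Ainv = matmul(Ainv, f_tilde(i))
    return Ainv
```

For the 3 × 3 Hilbert matrix, whose Neville elimination yields multipliers 1/2, 2/3, 1/3 and pivots 1, 1/12, 1/180, this routine recovers the exact inverse [[9,−36,30],[−36,192,−180],[30,−180,180]].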

Remark 2

Let us also observe that if A is a nonsingular and STP matrix, then \(A^T\) is also nonsingular and STP. Moreover, the bidiagonal decomposition of \(A^T\) can be computed as

$$ A^{T}={G}_{n}^{T}{G}_{n-1}^{T}{\cdots} {G}_{1}^{T} D {F}_{1}^{T} {\cdots} {F}_{n-1}^{T}{F}_{n}^{T}, $$
(10)

where \(F_i\) and \(G_i\), i = 1,…,n, are the lower and upper triangular bidiagonal matrices given in the bidiagonal factorization (5). Furthermore, if the nonsingular and STP matrix A is symmetric, then (7) is satisfied and, taking into account the uniqueness of the factorization, we immediately deduce that \(G_{i}={F_{i}^{T}}\), i = 1,…,n. Consequently, the bidiagonal decomposition (5) satisfies

$$ A=F_{n}F_{n-1}{\cdots} F_{1} D {F_{1}^{T}} {\cdots} F_{n-1}^{T} {F_{n}^{T}}, $$

where the matrices \(F_i\), i = 1,…,n, are the lower triangular bidiagonal matrices described in (6), whose off-diagonal entries coincide with the positive multipliers of the Neville elimination of A, and D is the diagonal matrix with the positive diagonal pivots of the Neville elimination of A.

Finally, let us recall that a real value x is computed with high relative accuracy (HRA) whenever the computed value \(\tilde x\) satisfies

$$ \frac{ | x-\tilde x | }{ | x | } < Ku, $$

where u is the unit round-off and K > 0 is a constant independent of the arithmetic precision. Clearly, HRA implies great accuracy, since the relative errors of the computations are of the order of the machine precision. A sufficient condition for an algorithm to be computable with HRA is the no inaccurate cancellation (NIC) condition, which is satisfied if the algorithm only evaluates products, quotients, sums of numbers of the same sign, subtractions of numbers of opposite sign, or subtractions of initial data (cf. [11, 21]).

If the bidiagonal factorization (5) of a nonsingular and TP matrix A can be computed with HRA, then its eigenvalues and singular values, its inverse \(A^{-1}\), and even the solution of systems of linear equations Ax = b, for vectors b with alternating signs, can also be computed with HRA using the algorithms provided in [22].

3 Total positivity and accurate computations with Gram matrices of polynomial Bernstein bases

The Bernstein basis of the space \({\mathbf P}^{n}[0,1]\) of polynomials of degree less than or equal to n on the interval [0,1] is \(({B^{n}_{0}},\ldots , {B_{n}^{n}})\) with

$$ {B^{n}_{i}}(t):={n \choose i} t^{i} (1-t)^{n-i},\quad i=0,\ldots,n. $$
(11)

These basis functions belong to the vector space \(L_{2}[0,1]\) of square integrable functions on [0,1], which is a Hilbert space under the following inner product

$$ \left< f,g\right>:= {{\int}_{0}^{1}} t^{\alpha} (1-t)^{\beta} f(t)g(t) dt, \quad \alpha,\beta >-1. $$
(12)

The corresponding Gram matrix is the symmetric matrix \(M = (M_{i,j})_{1\le i,j\le n+1}\), where

$$ \begin{array}{@{}rcl@{}} M_{i,j} &=& {{\int}_{0}^{1}} t^{\alpha} (1-t)^{\beta} B^{n}_{i-1}(t) B^{n}_{j-1}(t) dt \\ &=&{n \choose i-1} {n \choose j-1} \frac{ {\Gamma}(i+j+\alpha-1){\Gamma} (2n-i-j+\beta+3) }{\Gamma(2n+\alpha+\beta+2)}, \end{array} $$
(13)

and Γ(x) denotes the well-known Gamma function.
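
For nonnegative integer weights, the entries (13) can be checked against term-by-term integration. The following Python snippet (ours, for the unweighted case α = β = 0) expands \((1-t)^b\) with the binomial theorem, integrates the monomials exactly, and compares the result with the closed form:

```python
from fractions import Fraction
from math import comb, factorial

def exact_moment(a, b):
    """Exact value of the integral of t^a (1-t)^b over [0,1], obtained by
    expanding (1-t)^b binomially and integrating term by term."""
    return sum(Fraction(comb(b, k) * (-1) ** k, a + k + 1) for k in range(b + 1))

# check entry (13) for alpha = beta = 0 and a small degree n
n = 4
for i in range(1, n + 2):
    for j in range(1, n + 2):
        direct = comb(n, i - 1) * comb(n, j - 1) * exact_moment(i + j - 2, 2 * n - i - j + 2)
        closed = Fraction(comb(n, i - 1) * comb(n, j - 1)
                          * factorial(i + j - 2) * factorial(2 * n - i - j + 2),
                          factorial(2 * n + 1))
        assert direct == closed
```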

Let us recall that the dual Bernstein basis of \({\mathbf P}^{n}[0,1]\) is the polynomial basis \(({D^{n}_{0}},\ldots , {D_{n}^{n}})\) satisfying

$$ \langle {D^{n}_{i}}, {B^{n}_{j}}\rangle= {{\int}_{0}^{1}} t^{\alpha} (1-t)^{\beta} {D^{n}_{i}}(t) {B^{n}_{j}}(t) dt= \delta_{i,j}, $$

for i,j = 0,…,n (cf. [25, 26, 35, 36]) and then

$$ ({B^{n}_{0}},\ldots, {B^{n}_{n}})^{T} = M ({D^{n}_{0}},\ldots, {D^{n}_{n}})^{T}, $$

with M described by (13) (see Lemma 1 of [29]). That means that the Gram matrix M in (13) is the change of basis matrix from the dual Bernstein basis to the Bernstein basis.

Now, let \({\mathbf P}^{n}_{r,l}[0, 1]=\text {span}\{{B^{n}_{r}},\ldots ,B^{n}_{n-l}\}\), with r,l ≥ 0 and r + l ≤ n, be the space formed by all polynomials of degree at most n, defined on [0,1], whose derivatives of order at most r − 1 at t = 0 and of order at most l − 1 at t = 1 vanish.

The constrained dual Bernstein basis of degree n is \((D^{(n,r,l)}_{r},\ldots , D^{(n,r,l)}_{n-l})\subset {\mathbf P}^{n}_{r,l}[0, 1]\) such that \(\langle D^{(n,r,l)}_{i}, {B^{n}_{j}}\rangle =\delta _{i,j}\) for i,j = r,…,n − l. The principal submatrix of the Gram matrix M in (13), \(M^{r,l} = (M_{i,j})_{r+1\le i,j\le n-l+1}\), satisfies

$$ ({B^{n}_{r}},\ldots, B^{n}_{n-l})^{T} = M^{r,l} (D^{(n,r,l)}_{r},\ldots, D^{(n,r,l)}_{n-l})^{T}. $$
(14)

Taking into account (13), \(M^{r,l}\) can be described as \(M^{r,l} = (M_{i,j})_{1\le i,j\le m+1}\), with m := n − r − l and

$$ M_{i,j} = {n \choose r+i - 1} {n \choose r+j - 1} \frac{ {\Gamma}(2r+i+j+\alpha - 1) {\Gamma}(2n - 2r - i - j+\beta+ 3) }{\Gamma(2n+\alpha+\beta+ 2) }. $$
(15)

The following result proves that the Gram matrices \(M^{r,l}\) with respect to the inner product (12) are STP and provides the multipliers and the diagonal pivots of their Neville elimination. As we shall show, the corresponding bidiagonal factorization (5) provides accurate computations related to these matrices. We shall use the following generalization of combinatorial numbers. Given \(\alpha \in \mathbb R\) and \(n\in \mathbb N\),

$$ {\alpha\choose n}:=\frac{ \alpha (\alpha-1){\cdots} (\alpha-n+1)}{n!},\quad {\alpha\choose \alpha-n}:={\alpha\choose n}. $$
(16)

Theorem 2

For \(n \in {\mathbb {N}}\) and integers r,l ≥ 0 with r + l ≤ n, let m := n − r − l. Then, the (m + 1) × (m + 1) Gram matrix \(M^{r,l}\) described by (15) is STP. Moreover, \(M^{r,l}\) admits a factorization of the form (5) such that

$$ M^{r,l}=F_{m}F_{m-1}{\cdots} F_{1} D G_{1} {\cdots} G_{m-1} G_{m}, $$
(17)

where \(F_i\) and \(G_i\), i = 1,…,m, are the lower and upper triangular bidiagonal matrices given by (6) and \(D=\text {diag}\left (p_{1,1},\ldots , p_{m+1,m+1}\right )\). The off-diagonal entries \(m_{i,j}\) and \(\widetilde {m}_{i,j}\), 1 ≤ j < i ≤ m + 1, are given by

$$ \begin{array}{@{}rcl@{}} &&m_{i,j}=\frac{(n-r-i+2) (2r+i+\alpha-1) (2n-2r-i+\beta+3)}{ (r+i-1)(2n-2r-i-j+\beta+3) (2n-2r-i-j+\beta+4)}, \\ && \widetilde{m}_{i,j} = {m}_{i,j}. \end{array} $$
(18)

Moreover, the diagonal entries \(p_{i,i}\), 1 ≤ i ≤ m + 1, satisfy

$$ \begin{array}{@{}rcl@{}} p_{1,1}\!&=&\!{n\choose r}^{2} \frac{\Gamma(2r+\alpha+1) {\Gamma}(2n-2r+\beta+1) }{\Gamma(2n+\alpha+\beta+2)}, \\ p_{i+1,i+1} \!&=&\! \frac{i (n - r - i + 1)^{2} (2r + i + \alpha) (2 n - i+\alpha+\beta + 2) (2 n - 2r - i + \beta + 2) }{(r + i)^{2} (2 n - 2r - 2i + \beta + 1) (2 n - 2r - 2i+\beta + 2)^{2}(2 n - 2r - 2i + \beta + 3)}\\&& p_{i,i}, \end{array} $$
(19)

for 1 ≤ i ≤ m.

Proof

Let \(M^{(1)} := M^{r,l}\) and \(M^{(k)} =(M_{ij}^{(k)})_{1\leq i,j \leq m+1}\), k = 2,…,m + 1, be the matrices obtained after k − 1 steps of the NE process of \(M^{r,l}\). First, let us see by induction on k that

$$ \begin{array}{@{}rcl@{}} M_{i,j}^{(k)}&=& {n \choose r+i-1}{n\choose r+j-1} {j-1\choose k-1}\\&& \frac{ {\Gamma}(2r+i+j+\alpha-k){\Gamma} (2n-2r-i-j+\beta+3) }{{2n-2r-i+\beta+2\choose k-1} {\Gamma}(2n+\alpha+\beta-k+3) } , \end{array} $$
(20)

for 1 ≤ i,j ≤ m + 1. By (15),

$$ M_{i,j}^{(1)} = {n \choose r+i - 1} {n \choose r+j - 1} \frac{ {\Gamma}(2r+i+j+\alpha - 1) {\Gamma}(2n - 2r-i - j+\beta+ 3) }{\Gamma(2n+\alpha+\beta+ 2) } $$

and (20) holds for k = 1. Let us now suppose that (20) holds for some k ∈{1,…,m}. Then, we have that

$$ \frac{M_{i,k}^{(k)}}{M_{i-1,k}^{(k)}}= \frac{ {n\choose r+ i-1}}{ {n\choose r+ i-2} } \frac{ {2n-2r-i+\beta+3\choose k-1}}{{2n-2r-i+\beta+2\choose k-1} } \frac{ {\Gamma}(2r+i+\alpha) {\Gamma}(2n-2r-i+\beta-k+3) }{\Gamma(2r+i+\alpha-1) {\Gamma}(2n-2r-i+\beta-k+4) } . $$

Now, taking into account the following property of the Gamma function

$$ {\Gamma}(x+1)=x{\Gamma}(x), $$
(21)

it can be easily checked that

$$ \frac{M_{i,k}^{(k)}}{M_{i-1,k}^{(k)}}= \frac{ {n\choose r+ i-1}}{ {n\choose r+ i-2} } \frac{{2n-2r-i+\beta+3\choose k-1}}{{2n-2r-i+\beta+2\choose k-1} } \frac{ (2r+i+\alpha-1) }{ (2n-2r-i+\beta-k+3) }, $$

for i = k + 1,…,m + 1. Using the generalization (16) of combinatorial numbers, we have the following identities

$$ \frac{{n\choose r+i-1}}{ {n\choose r+i-2} } =\frac{n-r-i+2}{r+i-1},\quad \frac{{2n-2r-i+\beta+3\choose k-1}}{{2n-2r-i+\beta+2\choose k-1} }= \frac{2n-2r-i+\beta+3}{2n-2r-i+\beta-k+4}, $$

and then, we can write

$$ \begin{array}{@{}rcl@{}} \frac{M_{i,k}^{(k)}}{M_{i-1,k}^{(k)}}&=& \frac{(n-r-i+2) (2n-2r-i+\beta + 3) (2r+i+\alpha -1) }{ (r+ i-1) (2n-2r-i +\beta-k +4) (2n-2r-i+\beta-k +3) },\\&& i=k+1,\ldots,m+1. \end{array} $$
(22)

Since \(M_{i,j}^{(k+1)}=M_{i,j}^{(k)}- \frac {M_{i,k}^{(k)}}{M_{i-1,k}^{(k)}}M_{i-1,j}^{(k)}\), using (21) and (22), we have

$$ \begin{array}{@{}rcl@{}} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\! M_{i,j}^{(k+1)} &=& {n\choose r+i-1}{n\choose r+j-1} {j-1\choose k-1} \times \\ &&\times \frac{ {\Gamma}(2r+ i+j+\alpha-k -1) {\Gamma}(2n-2r-i-j+\beta+3) }{{2n-2r-i+\beta+2\choose k-1}{\Gamma}(2n +\alpha+\beta-k+3) } {C}_{i,j}^{(k)}, \end{array} $$
(23)

with

$$ \begin{array}{@{}rcl@{}} {C}_{i,j}^{(k)}&:=& 2r+ i+j+\alpha-k -1 - \frac{ (2r+i+ \alpha-1) (2n-2r-i-j+\beta+3)}{ 2n-2r-i+\beta-k+3 } \\ &=& \frac{ (j-k) (2n+\alpha+\beta -k+2)}{ 2n-2r-i+ \beta-k+ 3 }. \end{array} $$
(24)

Finally, from (23), (24) and the equalities

$$ \begin{array}{@{}rcl@{}} (j - k) {j-1\choose k-1}\!&=&\! k{j - 1\choose k }, \quad (2n-2r-i+\beta-k+3) { 2n-2r-i+\beta +2\choose k -1}\\\!&=&\! k{2n-2r-i + \beta+2\choose k }, \end{array} $$

we deduce that

$$ \begin{array}{@{}rcl@{}} M_{i,j}^{(k+1)} &=& {n\choose r+i-1} {n\choose r+j-1} {j-1\choose k }\\&& \frac{ {\Gamma} (2r+i+j+\alpha-k-1) {\Gamma}(2n-2r-i-j+\beta+ 3) }{ {2n-2r-i+\beta+2\choose k}{\Gamma}(2n+\alpha+\beta -k+2) }, \end{array} $$
(25)

and formula (20) also holds for k + 1.

By (3) and (20), we can easily deduce that the pivots \(p_{i,j}\) of the NE of \(M^{r,l}\) satisfy

$$ \begin{array}{@{}rcl@{}} p_{i,j}&=&M_{i,j}^{(j)}= {n\choose r+i-1}{n\choose r+j-1}\\&& \frac{ {\Gamma}(2r+i+\alpha) {\Gamma}(2n-2r-i-j+\beta+3) }{ {\Gamma}(2n-j+\alpha+\beta+3){2n-2r-i+\beta+2\choose j-1} }, \end{array} $$
(26)

for 1 ≤ j ≤ i ≤ m + 1. Then, for the particular case i = j,

$$ \begin{array}{@{}rcl@{}} \!\!\!\!\!\!\!\!\!\!p_{i,i} = {n\choose r+i - 1}^{2} \frac{ {\Gamma}(2r+i+\alpha) {\Gamma}(2n - 2r - 2i +\beta+3) } { {\Gamma}(2n-i+\alpha+\beta+3) {2n-2r-i+\beta+2\choose i-1}}, \quad 1\le i\le m+1. \end{array} $$
(27)

Then, it can be easily checked that

$$ p_{1,1}= {n\choose r }^{2} \frac{\Gamma(2r+\alpha+1) {\Gamma}(2n-2r+\beta+1) }{\Gamma(2n+\alpha+\beta+2)}, $$

and

$$ \frac{ p_{i+1,i+1}}{p_{i,i}} = \frac{i (n - r - i + 1)^{2} (2r + i + \alpha) (2 n - i+\alpha+\beta+2) (2 n - 2r - i+\beta+2) }{(r + i)^{2} (2 n - 2r - 2i + \beta + 1) (2 n - 2r - 2i + \beta + 2)^{2}(2 n - 2r - 2i + \beta + 3)}, $$

and so, (19) holds. By formula (26), the pivots of the NE of \(M^{r,l}\) are positive. Finally, using (4) and (26), the multipliers \(m_{i,j}\), 1 ≤ j < i ≤ m + 1, can be written as

$$ \begin{array}{@{}rcl@{}} m_{i,j} = \frac{ p_{i,j}}{p_{i-1,j} } = \frac{(n-r-i+2) (2r+i+\alpha-1) (2n-2r-i+\beta+3)}{ (r+i-1)(2n-2r-i-j+\beta+3) (2n-2r-i-j+\beta+4)}. \end{array} $$

Taking into account that \(M^{r,l}\) is a symmetric matrix, we conclude that \(\widetilde m_{i,j}=m_{i,j}\), 1 ≤ j < i ≤ m + 1. Clearly, the pivots and multipliers of the NE of \(M^{r,l}\) are positive and so, \(M^{r,l}\) is STP. □

Let us notice that, from Theorem 2, the bidiagonal factorization (5) of the Gram matrix \(M^{r,l}\) described by (15) can be represented by means of the (m + 1) × (m + 1) matrix \(BD(M^{r,l}) = (BD(M^{r,l})_{i,j})_{1\le i,j\le m+1}\) such that

$$ BD(M^{r,l})_{i,j}:=\begin{cases} \frac{(n-r-i+2) (2r+i+\alpha-1) (2n-2r-i+\beta+3)}{ (r+i-1)(2n-2r-i-j+\beta+3) (2n-2r-i-j+\beta+4)} , & \textrm{if } i>j, \\ {n\choose r+i-1}^{2} \frac{ {\Gamma}(2r+i+\alpha) {\Gamma}(2n-2r-2i +\beta+3) } { {\Gamma}(2n-i+\alpha+\beta+3) {2n-2r-i+\beta+2\choose i-1}}, & \textrm{if } i=j, \\ \frac{(n-r-j+2) (2r+j+\alpha-1) (2n-2r-j+\beta+3)}{ (r+j-1)(2n-2r-i-j+\beta+3) (2n-2r-i-j+\beta+4)}, & \textrm{if } i<j. \end{cases} $$
(28)
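
Theorem 2 can be verified in exact arithmetic for particular choices of the parameters. The following Python script (ours and purely illustrative; it takes α = β = 0 so that every Gamma value is a factorial) builds \(M^{r,l}\) from (15), runs the Neville elimination with rational arithmetic, and compares the resulting multipliers and pivots with the closed forms (18) and (19):

```python
from fractions import Fraction
from math import comb, factorial

def gram_bernstein(n, r, l):
    """Exact Gram matrix M^{r,l} of (15) for the weight alpha = beta = 0,
    where every Gamma value is a factorial: Gamma(k) = (k-1)!."""
    m = n - r - l
    return [[Fraction(comb(n, r + i - 1) * comb(n, r + j - 1)
                      * factorial(2 * r + i + j - 2)
                      * factorial(2 * n - 2 * r - i - j + 2),
                      factorial(2 * n + 1))
             for j in range(1, m + 2)] for i in range(1, m + 2)]

n, r, l = 6, 1, 2
m = n - r - l
N = m + 1
A = gram_bernstein(n, r, l)

mults = {}
for k in range(N - 1):                 # Neville elimination, rows bottom-up
    for i in range(N - 1, k, -1):
        q = A[i][k] / A[i - 1][k]
        mults[(i + 1, k + 1)] = q      # multiplier m_{i,j} in 1-based indices
        for j in range(k, N):
            A[i][j] -= q * A[i - 1][j]
piv = [A[i][i] for i in range(N)]

# multipliers must match (18) with alpha = beta = 0
for (i, j), q in mults.items():
    assert q == Fraction(
        (n - r - i + 2) * (2 * r + i - 1) * (2 * n - 2 * r - i + 3),
        (r + i - 1) * (2 * n - 2 * r - i - j + 3) * (2 * n - 2 * r - i - j + 4))

# diagonal pivots must match (19)
p = Fraction(comb(n, r) ** 2 * factorial(2 * r) * factorial(2 * n - 2 * r),
             factorial(2 * n + 1))
assert piv[0] == p
for i in range(1, m + 1):
    p *= Fraction(i * (n - r - i + 1) ** 2 * (2 * r + i) * (2 * n - i + 2)
                  * (2 * n - 2 * r - i + 2),
                  (r + i) ** 2 * (2 * n - 2 * r - 2 * i + 1)
                  * (2 * n - 2 * r - 2 * i + 2) ** 2 * (2 * n - 2 * r - 2 * i + 3))
    assert piv[i] == p
```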

The following result provides conditions on α and β under which \(M^{r,l}\) and \((M^{r,l})^{-1}\) can be computed with HRA.

Corollary 1

For \(n \in {\mathbb {N}}\) and integers r,l ≥ 0 with r + l ≤ n, let m := n − r − l. For any α,β > − 1 such that Γ(α + 1), Γ(β + 1) and Γ(α + β + 2) can be evaluated with HRA, the (m + 1) × (m + 1) matrix \(M^{r,l}\) and its inverse \((M^{r,l})^{-1}\) can also be computed with HRA.

Proof

From Theorem 2, we deduce that the matrices \(M^{r,l}\) and \((M^{r,l})^{-1}\) can be computed with HRA for any α,β > − 1 such that the values Γ(2r + α + 1), Γ(2n − 2r + β + 1) and Γ(2n + α + β + 2) in (19) can be computed with HRA. Then, having in mind the following identity

$$ {\Gamma}(x+n)=\left( {\prod}_{k=0}^{n-1}(x+k) \right){\Gamma} (x), $$

the result immediately holds. □

Let us note that the conditions in Corollary 1 are satisfied when \(\alpha , \beta \in {\mathbb {N}} \cup \{0\}\) since, in this case, Γ(α + 1) = α!, Γ(β + 1) = β! and Γ(α + β + 2) = (α + β + 1)!. They also hold when α,β ∈{− 1/2,1/2}, corresponding to the four Chebyshev-type weights, since

$$ {\Gamma}(n+1/2)= \frac{(2n)!}{4^{n}n!}\sqrt{\pi}, \quad n\in{\mathbb{N}}. $$
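
This identity is easy to spot-check numerically, for instance against the standard library's `math.gamma`:

```python
import math

# spot-check Gamma(n + 1/2) = (2n)!/(4^n n!) * sqrt(pi) for n = 1, ..., 7
for n in range(1, 8):
    closed = math.factorial(2 * n) / (4 ** n * math.factorial(n)) * math.sqrt(math.pi)
    assert math.isclose(math.gamma(n + 0.5), closed, rel_tol=1e-12)
```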

Finally, let us observe that when α = 0 and β = 0, the Gram matrix of the Bernstein basis (11), corresponding to the inner product

$$ \left< f,g\right> = {{\int}_{0}^{1}} f(t)g(t) dt, $$

is \(M = (M_{i,j})_{1\le i,j\le n+1}\) with

$$ M_{i,j} = {n \choose i-1} {n \choose j-1} \frac{ (i+j -2)! (2n-i-j+ 2)! }{(2n+ 1)!}, $$
(29)

and it is usually called the Bernstein mass matrix (see [1] or [20]).

Then, as a direct consequence of Theorem 2, we can deduce that the Bernstein mass matrix given by (29) is STP and its bidiagonal factorization can be represented by means of the (n + 1) × (n + 1) matrix BD(M) = (BD(M)i,j)1≤i,jn+ 1 such that

$$ BD(M)_{i,j}:=\begin{cases} \frac{(n-i+2)(2n-i+3)}{ (2n-i-j+3) (2n-i-j+4)} , & \textrm{if } i>j, \\ \left( {n\choose i-1}\frac{ (i-1)! }{ (2n-i+2)!} \right)^{2} (2n-2i+2)!\,(2n-2i+3)! , & \textrm{if } i=j, \\ \frac{(n-j+2)(2n-j+3)}{ (2n-i-j+3) (2n-i-j+4)}, & \textrm{if } i<j. \end{cases} $$
(30)

The matrix BD(M) can be immediately obtained by replacing α = 0, β = 0, r = 0 and l = 0 in (28).

Section 5 shows accurate results obtained when solving algebraic problems using the proposed bidiagonal factorization and the algorithms presented in [22, 23].

4 Total positivity and accurate computations with Gram matrices of Bernstein bases of negative degree

For a given \(m\in {\mathbb {N}}\), the (n + 1)-dimensional Bernstein basis of degree − m is the basis of rational functions \((B^{-m}_{0}, B^{-m}_{1},\ldots , B_{n}^{-m} )\) defined as

$$ B^{-m}_{i}(t):={-m \choose i} t^{i} (1-t)^{-m-i} $$
(31)

with

$$ {-m \choose i} = \frac{(-m)(-m-1){\cdots} (-m-i+1) }{i!} = (-1)^{i} {m+i-1 \choose i},\quad i=0,1,{\ldots} , $$
(32)

(see (16)). From formulae (31) and (32), we can easily deduce that the Bernstein basis functions of negative degree are non-negative over the interval \((-\infty , 0]\). Furthermore, they are linearly independent rational functions and form a partition of unity (cf. [19]). These functions belong to the vector space \(L_{2}(-\infty ,0)\) of square integrable functions, which is a Hilbert space under the inner product

$$ \left< f,g\right>:= {\int}_{-\infty}^{0} f(t)g(t) dt. $$
(33)

It can be checked that the corresponding Gram matrix \(M = (M_{i,j})_{1\le i,j\le n+1}\) can be computed exactly as

$$ M_{i,j} = {\int}_{-\infty}^{0} B^{-m}_{i-1}(t) B^{-m}_{j-1}(t) dt = {m + i - 2 \choose i-1} {m+j - 2 \choose j-1} \frac{ (i+j-2)! (2m-2)! }{(2m+i+j-3)!}, $$
(34)

for 1 ≤ i,j ≤ n + 1.
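
In exact rational arithmetic, the entries (34) are straightforward to generate. The following Python helper (ours, for illustration) does so; note that for m = 1 the entries reduce to 1/(i + j − 1), i.e., M is then the Hilbert matrix, a classical ill-conditioned STP example:

```python
from fractions import Fraction
from math import comb, factorial

def gram_negative(m, n):
    """Exact (n+1) x (n+1) Gram matrix (34) of the Bernstein basis of degree -m."""
    return [[Fraction(comb(m + i - 2, i - 1) * comb(m + j - 2, j - 1)
                      * factorial(i + j - 2) * factorial(2 * m - 2),
                      factorial(2 * m + i + j - 3))
             for j in range(1, n + 2)] for i in range(1, n + 2)]
```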

The following result proves that Gram matrices of Bernstein bases of negative degree are STP and provides the multipliers and the diagonal pivots of their NE. The corresponding bidiagonal factorization (5) will provide accurate computations related to these matrices.

Theorem 3

The (n + 1) × (n + 1) Gram matrix M given by (34) is STP. Moreover, M admits a factorization of the form (5) such that

$$ M=F_{n}F_{n-1}{\cdots} F_{1} D G_{1} {\cdots} G_{n-1} G_{n}, $$
(35)

where \(F_i\) and \(G_i\), i = 1,…,n, are the (n + 1) × (n + 1) lower and upper triangular bidiagonal matrices of the form (6) and \(D=\text {diag}\left (p_{1,1},\ldots , p_{n+1,n+1}\right )\). The entries \(m_{i,j}\), \(\widetilde {m}_{i,j}\) and \(p_{i,i}\) are given by

$$ \begin{array}{@{}rcl@{}} m_{i,j} \!&=&\! \frac{(m + i - 2)(2m + i - 3)}{ (2m + i + j - 3) (2m + i + j - 4)} ,\quad \widetilde{m}_{i,j} = {m}_{i,j},\quad 1\!\le\! j\!<\!i\!\le\! n + 1, \end{array} $$
(36)
$$ \begin{array}{@{}rcl@{}} p_{1,1} & = & \frac{1}{2m - 1},\quad p_{i+1,i+1}\!:=\! \frac{(2 m + i-2)^{2}}{ 4 (2m + 2i - 1) (2m + 2 i - 3)} p_{i,i} ,\quad 1\!\le\! i\! \le\! n. \end{array} $$
(37)

Proof

Let \(M^{(1)} := M\) and \(M^{(k)} =(M_{ij}^{(k)})_{1\leq i,j \leq n+1}\), k = 2,…,n + 1, be the matrices obtained after k − 1 steps of the NE of M. First, let us see by induction on k that

$$ \begin{array}{@{}rcl@{}} M_{i,j}^{(k)}&=&{m+i-2\choose i-1}{m+j-2\choose j-1} {j-1\choose k-1} {2m+i+k-4\choose k-1}^{-1}\\&& \frac{ (i+j-k-1)! (2m+k-3)! }{ (2m+i+j-3)! }, \end{array} $$
(38)

for 1 ≤ i,j ≤ n + 1. For k = 1, \(M_{i,j}^{(1)}= {m+i-2 \choose i-1} {m+j-2 \choose j-1} (i+j-2)! (2m-2)! / (2m+i+j-3)! \) and (38) holds. Let us now suppose that (38) holds for some k ∈{1,…,n}. Then, it can be easily checked that

$$ \frac{M_{i,k}^{(k)}}{M_{i-1,k}^{(k)}}= \frac{ (2m+i-3) (m+i-2) }{ (2m+i+k-3) (2m+i+k-4) }, \quad i=k+1,\ldots,n+1. $$

Since \(M_{i,j}^{(k+1)}=M_{i,j}^{(k)}- \frac {M_{i,k}^{(k)}}{M_{i-1,k}^{(k)}}M_{i-1,j}^{(k)}\), taking into account the following equalities

$$ \begin{array}{@{}rcl@{}} && (m+i-2){m+i-3 \choose i-2} = (i-1){m+i-2 \choose i-1}, \\ && (2m+i+k-4){2m+i+k-5\choose k-1}= (2m+i-3){2m+i+k-4\choose k-1}, \end{array} $$

we can write

$$ \begin{array}{@{}rcl@{}} \!\!\!\!\!\!\!\!\!\!\!\!\!M_{i,j}^{(k+1)} \!&=&\! M_{i,j}^{(k)}- \frac{ (2m+i-3) (m+i-2) }{ (2m+i+k-3) (2m+i+k-4) } M_{i-1,j}^{(k)} \\ \!&=&\! {m+i - 2\choose i-1} {m+j - 2\choose j-1} {j - 1\choose k - 1}\frac{ (i + j - k - 2)! (2m+k - 3)! }{ (2m+i+j-4 )! } C_{i,j}^{(k)}, \end{array} $$
(39)

where

$$ \begin{array}{@{}rcl@{}} C_{i,j}^{(k)}\!&:=&\! \frac{ i+j-k-1 }{ (2m + i + j - 3){2m+i+k-4\choose k-1}} - \frac{ (i-1)(2m+i-3) }{ (2m + i + k - 3) (2m+i+k - 4){2m+i+k-5\choose k-1} } \\ \!&=&\! {2m+i+k-4\choose k-1}^{-1} \left( \frac{ i+j-k-1 }{ 2m+i+j-3 } - \frac{ i-1 }{ 2m+i+k-3 }\right) \\ &=& {2m+i+k-4\choose k-1}^{-1} \frac{ (j-k) (2m+k-2) }{ (2m+i+j-3)(2m+i+k-3) }. \end{array} $$
(40)

Finally, from (39), (40), and the identities

$$ \begin{array}{@{}rcl@{}} (j-k){j-1\choose k-1}&=&k {j-1\choose k },\quad (2m+k+i-3){2m+k+i-4 \choose k-1}\\&=&k {2m+k+i-3\choose k } \end{array} $$

we deduce that

$$ \begin{array}{@{}rcl@{}} M_{i,j}^{(k+1)} &=& {m+i-2\choose i-1}{m+j-2\choose j-1} {j-1\choose k } {2m+i+k-3\choose k }^{-1}\\&& \frac{ (i+j-k-2)! (2m+k-2)! }{ (2m+i+j-3)! } , \end{array} $$
(41)

and formula (38) also holds for k + 1.

Now, by (3) and (38), we can easily deduce that the pivots \(p_{i,j}\) of the NE of M satisfy

$$ \begin{array}{@{}rcl@{}} p_{i,j}&=&M_{i,j}^{(j)}= {m+i-2\choose i-1}{m+j-2\choose j-1}{2m+i+j-4\choose j-1}^{-1} \frac{ (i-1)! (2m+j-3)! }{ (2m+i+j-3)! }, \end{array} $$
(42)

for 1 ≤ j ≤ i ≤ n + 1. Then, in particular, for i = j we have

$$ p_{i,i} = {m+i-2\choose i-1}^{2} \frac{ \left( (i-1)! (2m+i-3)! \right)^{2} }{ (2m+2i-4)!(2m+2i -3)! } , \quad 1\le i\le n+1. $$
(43)

Then, it can be easily checked that p1,1 = 1/(2m − 1),

$$ \frac{ p_{i+1,i+1}}{p_{i,i}} = \frac{(2m+i-2)^{2}}{ 4 (2m+2i -1) (2m+2 i-3) }, $$

and so, (37) readily follows. Now, let us observe that, by formula (42), the pivots of the NE of M are positive and so, this elimination can be performed without row exchanges.

Finally, using (4) and (42), the multipliers mi,j can be written as

$$ \begin{array}{@{}rcl@{}} m_{i,j}&=\frac{ p_{i,j}}{p_{i-1,j} } = \frac{(m+i-2)(2m+i-3)}{ (2m+i+j-3) (2m+i+j-4)}, \quad 1\le j<i\le n+1. \end{array} $$
(44)

Taking into account that M is a symmetric matrix, we conclude that \(\widetilde m_{i,j}=m_{i,j}\), 1 ≤ j < i ≤ n + 1. Clearly, the pivots and multipliers of the NE of M are positive and so, M is STP. □

Now, from Theorem 3, the bidiagonal factorization (5) of the (n + 1) × (n + 1) Gram matrix M of the Bernstein basis of degree − m can be represented by means of the (n + 1) × (n + 1) matrix \(BD(M) = (BD(M)_{i,j})_{1\le i,j\le n+1}\) such that

$$ BD(M)_{i,j}:=\begin{cases} \frac{(m+i-2)(2m+i-3)}{ (2m+i+j-3) (2m+i+j-4)} , & \textrm{if } i>j, \\ {m+i-2\choose i-1}^{2}\frac{ \left( (i-1)! (2m+i-3)! \right)^{2} }{ (2m+2i-4)!(2m+2i -3)! } , & \textrm{if } i=j, \\ \frac{(m+j-2)(2m+j-3)}{ (2m+i+j-3) (2m+i+j-4)}, & \textrm{if } i<j. \end{cases} $$
(45)
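
As with Theorem 2, the closed forms (36) and (37) behind this representation can be verified in exact arithmetic for particular dimensions. The following Python script (ours, illustrative only) runs the Neville elimination of the Gram matrix (34) with rational arithmetic and checks every multiplier and pivot:

```python
from fractions import Fraction
from math import comb, factorial

m, n = 3, 4
N = n + 1
A = [[Fraction(comb(m + i - 2, i - 1) * comb(m + j - 2, j - 1)
               * factorial(i + j - 2) * factorial(2 * m - 2),
               factorial(2 * m + i + j - 3))
      for j in range(1, N + 1)] for i in range(1, N + 1)]

for k in range(N - 1):                 # Neville elimination, rows bottom-up
    for i in range(N - 1, k, -1):
        q = A[i][k] / A[i - 1][k]
        I, J = i + 1, k + 1            # 1-based indices of the multiplier m_{I,J}
        assert q == Fraction((m + I - 2) * (2 * m + I - 3),
                             (2 * m + I + J - 3) * (2 * m + I + J - 4))  # (36)
        for j in range(k, N):
            A[i][j] -= q * A[i - 1][j]

p = Fraction(1, 2 * m - 1)
assert A[0][0] == p                    # p_{1,1} of (37)
for i in range(1, N):
    p *= Fraction((2 * m + i - 2) ** 2,
                  4 * (2 * m + 2 * i - 1) * (2 * m + 2 * i - 3))
    assert A[i][i] == p                # recurrence of (37)
```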

Let us observe that the bidiagonal factorization (5) of the Gram matrix M can be computed with HRA. Consequently, using (9), its inverse matrix can also be computed with HRA as stated in the following result.

Corollary 2

Let M be the (n + 1) × (n + 1) Gram matrix of the Bernstein basis of degree − m described by (34). Then M and M− 1 can be computed with HRA.

Section 5 shows accurate results obtained when solving algebraic problems using the proposed bidiagonal factorization and the algorithms presented in [22, 23].

5 Numerical experiments

Let us suppose that A is an (n + 1) × (n + 1) nonsingular TP matrix whose bidiagonal decomposition (5) is represented by means of the matrix BD(A) given in (8). If BD(A) can be computed with HRA, then the Matlab functions TNEigenValues, TNSingularValues, TNInverseExpand and TNSolve of the library TNTools in [23] take BD(A) as input argument and compute with HRA the eigenvalues of A, the singular values of A, its inverse \(A^{-1}\) (using the algorithm presented in [32]) and the solution of systems of linear equations Ax = b, for vectors b whose entries have alternating signs. The computational cost of the function TNSolve is \(O(n^{2})\) elementary operations. On the other hand, as can be checked on page 303 of reference [32], the function TNInverseExpand also has a computational cost of \(O(n^{2})\), and thus improves on computing the inverse matrix by solving linear systems with TNSolve, taking the columns of the identity matrix as data (\(O(n^{3})\)). The computational cost of the other mentioned functions is \(O(n^{3})\).
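
The quadratic cost of the solver is easy to see: given BD(A), the system Ax = b is solved by sweeping through the bidiagonal factors of (5) one at a time, at O(n) operations per factor. The following Python sketch (ours; it mimics the cost structure of TNSolve but is not the TNTools code, and uses exact rational arithmetic instead of floating point) shows the idea:

```python
from fractions import Fraction

def bd_solve(B, b):
    """Solve A x = b given the compact matrix B = BD(A) of (8), never forming A.
    Each of the 2n bidiagonal sweeps costs O(n), so the total cost is O(n^2)."""
    n = len(B)
    x = [Fraction(v) for v in b]
    # forward substitutions through F_n, ..., F_1 (lower bidiagonal factors of (5))
    for i in range(n - 1, 0, -1):
        for r in range(i, n):
            x[r] -= B[r][r - i] * x[r - 1]
    # diagonal factor D
    for r in range(n):
        x[r] /= B[r][r]
    # back substitutions through G_1, ..., G_n (upper bidiagonal factors of (5))
    for i in range(1, n):
        for r in range(n - 1, i - 1, -1):
            x[r - 1] -= B[r - i][r] * x[r]
    return x
```

For the symmetric Pascal matrix of order 3, whose compact representation has all entries equal to 1, and b = (1, 0, 0)^T, the routine returns x = (3, −3, 1)^T, the first column of the inverse.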

For the polynomial Bernstein basis, \(({B^{n}_{0}},\ldots , {B_{n}^{n}})\), using Theorem 2, we have implemented a Matlab function for computing the matrix BD(M), where M is the Gram matrix described in (29) (see (30)) or its principal submatrix \(M^{r,l}\) (see (28)).

For the Bernstein basis of degree − m, \((B_{0}^{-m},\ldots ,B_{n}^{-m})\), considering Theorem 3, we have also implemented a Matlab function, which computes the matrix BD(M) in (45) for the Gram matrix M given in (34).

Observe that, in all cases, the computational complexity of computing the entries \(m_{i,j}\), \(\tilde m_{i,j}\), \(1 \le j < i \le n + 1\), is \(O(n^{2})\), and that of computing the pivots \(p_{i,i}\), \(1 \le i \le n + 1\), is \(O(n)\).

In the numerical experimentation, we have considered different (n + 1) × (n + 1) Gram matrices corresponding to Bernstein bases and to Bernstein bases of negative degree, as well as different (m + 1) × (m + 1) submatrices \(M_{r,l}\) of the Bernstein mass matrix, where \(m = n - r - l\). The numerical results illustrate the accuracy of the computations for the dimensions n + 1 = 10, 15, 20, 25. The authors will provide upon request the software implementing the above mentioned routines.

The 2-norm condition number of the considered Gram matrices has been obtained by means of the Mathematica command Norm[A,2] Norm[Inverse[A],2] and is shown in Table 1. We can clearly observe that the condition numbers increase significantly with the dimension of the matrices. This explains why traditional methods do not obtain accurate solutions when solving the aforementioned algebraic problems. In contrast, the numerical results will illustrate the high accuracy obtained when using the bidiagonal decompositions deduced in this paper together with the Matlab functions available in [23].

Table 1 Condition number of Bernstein mass matrices (left), submatrices of Bernstein mass matrices with r = 1 and l = 2 (center), and Gram matrices of Bernstein bases of negative degree − m = − 10 (right)
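
The growth reported in Table 1 is easy to reproduce in double precision. The sketch below builds the Bernstein mass matrix from the standard closed form (again an assumption on our part, since (29) is not reproduced in this excerpt) and evaluates its 2-norm condition number; for the largest dimensions the computed value is itself polluted by rounding, which is precisely the difficulty that Table 1 documents:

```python
import numpy as np
from math import comb

def cond_bernstein_mass(n):
    """2-norm condition number of the (n+1) x (n+1) Bernstein mass matrix,
    assuming the standard closed form for its entries."""
    M = np.array([[comb(n, i) * comb(n, j) / ((2 * n + 1) * comb(2 * n, i + j))
                   for j in range(n + 1)] for i in range(n + 1)])
    return np.linalg.cond(M, 2)

for dim in (10, 15, 20, 25):
    print(dim, cond_bernstein_mass(dim - 1))
```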

In our first numerical example we have computed the eigenvalues and singular values of the considered Gram matrices. Let us notice that Gram matrices of Bernstein bases of positive and negative degree are STP and hence, by Theorem 6.2 of [2], all their eigenvalues are positive and distinct. Furthermore, since Gram matrices are symmetric, their eigenvalues and singular values coincide.
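
As a small concrete instance of this observation, consider the Gram matrix of the Bernstein basis of degree 2 on [0, 1] (entries computed by us from the standard closed form, an assumption since (29) is not reproduced here). Its eigenvalues are 1/30, 1/6 and 1/3: positive, distinct, and equal to its singular values:

```python
import numpy as np

# Gram (mass) matrix of (B_0^2, B_1^2, B_2^2) on [0, 1]
M = np.array([[6, 3, 1],
              [3, 4, 3],
              [1, 3, 6]]) / 30.0
eigs = np.sort(np.linalg.eigvalsh(M))               # real, since M is symmetric
svals = np.sort(np.linalg.svd(M, compute_uv=False))
print(eigs)   # positive and distinct: 1/30, 1/6, 1/3
```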

We have computed the eigenvalues and singular values of the considered Gram matrices with the following algorithms:

  • The MATLAB functions TNEigenValues and TNSingularValues taking as argument the matrix representation (8) of the corresponding deduced bidiagonal decomposition (5).

  • The MATLAB functions eig and svd.

  • The explicit formula in Theorem 2.1 of [1] for the eigenvalues (and consequently for the singular values) of the Bernstein mass matrices, using Mathematica with 100-digit arithmetic.

  • Mathematica’s routines Eigenvalues and SingularValues with 100-digit arithmetic for computing the eigenvalues and singular values of the submatrices of Bernstein mass matrices and of the Gram matrices of the Bernstein bases of negative degree.

The values provided by Mathematica have been considered as the exact solution of the algebraic problem and the relative error e of each approximation has been computed as \(e:=|a-\tilde {a} |/|a|\), where a denotes the eigenvalue or singular value computed with Mathematica and \(\tilde {a}\) the eigenvalue or singular value computed with Matlab.
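
In code, this scalar relative error is a one-line helper (shown only to fix notation):

```python
def rel_err(a, a_tilde):
    """Relative error |a - ã| / |a| of an approximation ã to a nonzero a."""
    return abs(a - a_tilde) / abs(a)
```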

In Tables 2 and 3, the relative errors of the approximations to the lowest eigenvalue and the lowest singular value of the considered matrices are shown. We can observe that our methods provide very accurate results, in contrast to the inaccurate results provided by the Matlab commands eig and svd.

Table 2 Relative errors when computing the lowest eigenvalue of Bernstein mass matrices (left), submatrices of Bernstein mass matrices with r = 1 and l = 2 (center), and Gram matrices of Bernstein bases of negative degree − m = − 10 (right)
Table 3 Relative errors when computing the lowest singular value of Bernstein mass matrices (left), submatrices of Bernstein mass matrices with r = 1 and l = 2 (center), and Gram matrices of Bernstein bases of negative degree − m = − 10 (right)

On the other hand, in our second experiment, we have computed the inverse matrix of the considered Gram matrices with the following algorithms:

  • The MATLAB function TNInverseExpand, with the corresponding matrix representation (8) of the bidiagonal decomposition (5) as argument.

  • The MATLAB routine inv.

  • The explicit formula in Theorem 3.1 of [1] for computing the inverse of the Bernstein mass matrices with Mathematica and a 100-digit arithmetic.

  • Mathematica’s Inverse routine with 100-digit arithmetic for computing the inverses of the submatrices of the Bernstein mass matrices and of the Gram matrices of Bernstein bases of negative degree.

To assess the errors, we have compared both Matlab approximations with the inverse matrix \(A^{-1}\) computed by Mathematica using 100-digit arithmetic, using the formula \(e=\|A^{-1}-\widetilde {A}^{-1} \|_{2}/\|A^{-1}\|_{2}\) for the corresponding relative error. The obtained relative errors are shown in Table 4. Observe that the relative errors achieved with the bidiagonal decompositions obtained in this paper are much smaller than those obtained with the MATLAB command inv.
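
The loss of accuracy of inv on these matrices can be reproduced without 100-digit arithmetic. Since the Bernstein mass matrix has rational entries (using the standard closed form, an assumption since (29) is not reproduced here), its exact inverse can be obtained with Python's Fraction type and compared against the double-precision inverse:

```python
import numpy as np
from fractions import Fraction
from math import comb

def exact_inverse(M):
    """Gauss-Jordan inversion in exact rational arithmetic."""
    n = len(M)
    A = [row[:] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)  # nonzero pivot row
        A[c], A[p] = A[p], A[c]
        piv = A[c][c]
        A[c] = [v / piv for v in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c]
                A[r] = [v - f * w for v, w in zip(A[r], A[c])]
    return [row[n:] for row in A]

n = 15  # dimension n + 1 = 16
M = [[Fraction(comb(n, i) * comb(n, j), (2 * n + 1) * comb(2 * n, i + j))
      for j in range(n + 1)] for i in range(n + 1)]
exact = np.array([[float(v) for v in row] for row in exact_inverse(M)])
approx = np.linalg.inv(np.array(M, dtype=float))
err = np.linalg.norm(exact - approx, 2) / np.linalg.norm(exact, 2)
print(err)  # far above machine precision for this ill-conditioned matrix
```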

Table 4 Relative errors when computing the inverse of Bernstein mass matrices (left), submatrices of Bernstein mass matrices with r = 1 and l = 2 (center), and Gram matrices of Bernstein bases of negative degree − m = − 10 (right)

Finally, in our third experiment, we have computed the solutions of the linear systems Mc = d, where \(d=((-1)^{i+1}d_{i})_{1\le i\le n+1}\) and \(d_{i}\), i = 1,…,n + 1, are random nonnegative integer values.
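
A double-precision sketch of the setup of this experiment (with a fixed seed and the standard closed form for the mass matrix, both our own assumptions rather than the authors' data) is:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n = 9
M = np.array([[comb(n, i) * comb(n, j) / ((2 * n + 1) * comb(2 * n, i + j))
               for j in range(n + 1)] for i in range(n + 1)])
di = rng.integers(0, 100, size=n + 1)   # random nonnegative integers d_i
d = np.array([(-1) ** i * v for i, v in enumerate(di)],
             dtype=float)               # alternating signs, first entry >= 0
c = np.linalg.solve(M, d)               # double-precision analogue of "\"
print(np.linalg.norm(M @ c - d) / np.linalg.norm(d))  # relative residual
```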

We have solved these systems of linear equations, associated with the considered Gram matrices, with the following algorithms:

  • The MATLAB function TNSolve, using the matrix representation (8) of the proposed bidiagonal decompositions (5).

  • The MATLAB command \.

  • Mathematica’s LinearSolve routine with 100-digit arithmetic.

The vector provided by Mathematica has been considered as the exact solution c. Then we have computed in Mathematica the relative error of the Matlab approximation \(\tilde {c}\), using the formula \(e=\|c-\tilde c\|_{2}/ \|c\|_{2}\).

In Table 5, the relative errors when solving the aforementioned linear systems for different values of n are shown. Notice that the proposed methods preserve the accuracy: the relative errors do not considerably increase with the dimension of the system, in contrast with the results obtained with the Matlab command \.

Table 5 Relative errors when solving Mc = d with Bernstein mass matrices (left), submatrices of Bernstein mass matrices with r = 1 and l = 2 (center), and Gram matrices of Bernstein bases of negative degree − m = − 10 (right)