On the accurate computation of the Newton form of the Lagrange interpolant

In recent years, many efforts have been devoted to finding bidiagonal factorizations of nonsingular totally positive matrices, since their accurate computation makes it possible to solve several important algebraic problems with great precision, even for large and ill-conditioned matrices. In this framework, the present work provides the factorization of the collocation matrices of Newton bases, which are relevant to the Lagrange interpolation problem, together with an algorithm that computes this factorization to high relative accuracy. The factorization in turn allows the coefficients of the interpolating polynomial, as well as the singular values and the inverse of the collocation matrix, to be computed accurately. Conditions that guarantee high relative accuracy for these methods and, in the first case, for the classical recursion formula of divided differences, are determined. Numerical errors due to imprecise computer arithmetic or perturbed input data in the computation of the factorization are analyzed. Finally, numerical experiments illustrate the accuracy and effectiveness of the proposed methods on several algebraic problems, in stark contrast with traditional approaches.


Introduction
A classical approach to the Lagrange interpolation problem is the Newton form, in which the polynomial interpolant is written in terms of the Newton basis. Its coefficients, called divided differences, can be determined either by means of a recursive formula or by solving a linear system that involves the collocation matrix of the basis. The recursive computation of divided differences requires many subtractions, which can lead to cancellation: an error-inducing phenomenon that takes place when two nearly equal numbers are subtracted and that usually amplifies the effect of earlier errors in the computations. This is also the case for the notoriously ill-conditioned collocation matrices associated with the Lagrange interpolation problem: these errors can escalate rapidly as the number of nodes increases, eventually making it unfeasible to obtain a solution of any algebraic problem involving these matrices. It is worth noting that the errors in computing the divided differences can sometimes be ameliorated by considering different node orderings [9,28], although this strategy may not be sufficient to attain the required precision in a given high-order interpolation problem.
However, in some scenarios it is possible to keep numerical errors under control. In a given floating-point arithmetic, a real value is said to be determined to high relative accuracy (HRA) whenever the relative error of the computed value is bounded by the product of the unit round-off and a positive constant independent of the arithmetic precision. HRA implies great accuracy in the computations, since the relative errors have the same order as the machine precision and the accuracy is not affected by the dimension or the conditioning of the problem to be solved. A sufficient condition for an algorithm to be computable to HRA is the no inaccurate cancellation (NIC) condition: it is satisfied if the algorithm does not require inaccurate subtractions and only evaluates products, quotients, sums of numbers of the same sign, subtractions of numbers of opposite sign, or subtractions of initial data (cf. [6], [14]). In this sense, this paper provides the precise conditions under which the recursive computation of divided differences can be performed to HRA; these include strictly ordered nodes and a particular sign structure of the node values of the interpolated function.
In the search for algorithms that preserve HRA, a major step forward was the seminal work of Gasca and Peña for nonsingular totally positive matrices, which can be written as a product of bidiagonal matrices [12]. This factorization can be seen as a representation of the matrices that exploits their total positivity to achieve accurate numerical linear algebra: if it is provided to HRA, it allows several algebraic problems involving the collocation matrix of a given basis to be solved to HRA. In recent years, the search for bidiagonal decompositions of different totally positive bases has been a very active field of research [4-7,18-20].
The present work can be framed as a contribution to the above picture for the particular case of the well-known Newton basis and its collocation matrices. The conditions guaranteeing their total positivity are derived and a fast algorithm to obtain their bidiagonal factorization to HRA is provided. Under these conditions, the obtained bidiagonal decomposition can be applied to compute to HRA the singular values, the inverse, and also the solution of certain linear systems of equations. As will be illustrated, the accurate resolution of these systems provides an alternative procedure to calculate divided differences to high relative accuracy.
In order to make this paper as self-contained as possible, Section 2 recalls basic concepts and results related to total positivity, high relative accuracy and interpolation formulae. Section 3 focuses on the recursive computation of divided differences, providing conditions for their computation to high relative accuracy. In Section 4, the bidiagonal factorization of collocation matrices of Newton bases is derived and their total positivity is analyzed. A fast algorithm for the computation of the bidiagonal factorization is provided in Section 5, where the numerical errors appearing in floating-point arithmetic are also studied and a structured condition number for the considered matrices is deduced. Finally, Section 6 reports the numerical experiments performed.

Notations and auxiliary results
Let us recall that a matrix is totally positive (respectively, strictly totally positive) if all its minors are nonnegative (respectively, positive). By Theorem 4.2 and the arguments on p. 116 of [12], a nonsingular totally positive A ∈ R^{(n+1)×(n+1)} can be written as

A = F_n F_{n-1} · · · F_1 D G_1 · · · G_{n-1} G_n, (1)

where F_i, G_i ∈ R^{(n+1)×(n+1)}, i = 1, . . ., n, are, respectively, the totally positive, lower and upper triangular bidiagonal matrices described by (2), and D ∈ R^{(n+1)×(n+1)} is a diagonal matrix with positive diagonal entries.
In [14], the bidiagonal factorization (1) of a nonsingular and totally positive A ∈ R^{(n+1)×(n+1)} is represented by defining a matrix BD(A) = (BD(A)_{i,j})_{1≤i,j≤n+1} that stores, below the diagonal, the off-diagonal entries of the lower bidiagonal factors F_i; above the diagonal, those of the upper bidiagonal factors G_i; and, on the diagonal, the diagonal entries of D. This representation will allow us to define algorithms adapted to the totally positive structure, providing accurate computations with A.
If the bidiagonal factorization (1) of a nonsingular and totally positive matrix A can be computed to high relative accuracy, then the eigenvalues and singular values of A, the inverse A^{-1}, and even the solution of linear systems Ax = b, for vectors b with alternating signs, can also be computed to high relative accuracy using the algorithms provided in [15].
Let (u_0, . . ., u_n) be a basis of a space U of functions defined on I ⊆ R. Given a sequence of parameters t_1 < · · · < t_{n+1} on I, the corresponding collocation matrix is defined by

M := (u_{j-1}(t_i))_{1≤i,j≤n+1}.

Let P_n(I) be the (n + 1)-dimensional space formed by all polynomials of degree not greater than n in a variable defined on I ⊆ R, that is, P_n(I) := span{1, t, . . ., t^n}, t ∈ I.
Given nodes t_1, . . ., t_{n+1} on I and a function f : I → R, we shall address the Lagrange interpolation problem of finding p_n ∈ P_n(R) such that

p_n(t_i) = f(t_i), i = 1, . . ., n + 1.

When considering the monomial basis (m_0, . . ., m_n), with m_i(t) = t^i for i = 0, . . ., n, the interpolant can be written as

p_n(t) = Σ_{i=1}^{n+1} c_i t^{i-1},

and the coefficients c_i, i = 1, . . ., n + 1, form the solution vector c = (c_1, . . ., c_{n+1})^T of the linear system V c = f, where f := (f(t_1), . . ., f(t_{n+1}))^T and V ∈ R^{(n+1)×(n+1)} is the collocation matrix of the monomial basis at the nodes t_i, i = 1, . . ., n + 1, that is,

V := (t_i^{j-1})_{1≤i,j≤n+1}. (6)

Let us observe that V is the Vandermonde matrix at the considered nodes and recall that Vandermonde matrices have relevant applications in linear interpolation and numerical quadrature (see for example [8] and [27]). In fact, for any increasing sequence of positive values 0 < t_1 < · · · < t_{n+1}, the corresponding Vandermonde matrix V in (6) is known to be strictly totally positive (see Section 3 of [14]). In [14] and in Theorem 3 of [17] it is shown that V admits a bidiagonal factorization of the form (1), where F_i ∈ R^{(n+1)×(n+1)} and G_i ∈ R^{(n+1)×(n+1)}, i = 1, . . ., n, are the lower and upper triangular bidiagonal matrices described by (2). Using Koev's notation, this factorization of V can be represented through the matrix BD(V). Moreover, it can be easily checked that the computation of BD(V) does not require inaccurate cancellations and so can be performed to high relative accuracy.
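To make the setting concrete, the following sketch (in Python with NumPy, used here purely for illustration; the experiments in this paper are carried out in Matlab) builds the Vandermonde collocation matrix V of (6) and shows how quickly its standard condition number degrades even for moderate dimensions:

```python
import numpy as np

def vandermonde(t):
    """Collocation matrix of the monomial basis at the nodes t:
    V[i, j] = t_i ** (j - 1) in the 1-based notation of (6)."""
    t = np.asarray(t, dtype=float)
    # Columns ordered as 1, t, t^2, ..., t^n
    return np.vander(t, increasing=True)

# Equidistant positive increasing nodes: V is strictly totally
# positive, yet its standard condition number grows very fast.
t = np.linspace(0.1, 1.0, 10)
V = vandermonde(t)
print(np.linalg.cond(V))
```

This ill-conditioning is exactly why unstructured solvers lose accuracy on these matrices, while the bidiagonal representation BD(V) does not.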
When considering interpolation nodes t_1, . . ., t_{n+1} with t_i ≠ t_j for i ≠ j, and the corresponding Lagrange polynomial basis (ℓ_0, . . ., ℓ_n), with

ℓ_i(t) := Π_{j=1, j≠i+1}^{n+1} (t − t_j)/(t_{i+1} − t_j), i = 0, . . ., n,

the Lagrange formula of the polynomial interpolant p_n is

p_n(t) = Σ_{i=0}^{n} f(t_{i+1}) ℓ_i(t). (13)

Taking into account (13), (5) and (7), we have

(m_0, . . ., m_n) = (ℓ_0, . . ., ℓ_n) V. (14)

Let us note that identity (14) means that the Vandermonde matrix is the change of basis matrix between the (n + 1)-dimensional monomial and Lagrange bases of the polynomial space P_n(R) corresponding to the considered interpolation nodes.
The Lagrange interpolation formula (13) is usually considered only for small numbers of interpolation nodes, due to shortcomings such as the following:
• The evaluation of the interpolant p_n(t) requires O(n^2) flops.
• A new computation from scratch is needed when adding a new interpolation data pair.
• The computations are numerically unstable.
Nevertheless, as explained in [2], the Lagrange formula can be improved by taking into account the identities ℓ_i(t) = ℓ(t) ω_{i+1}/(t − t_{i+1}), i = 0, . . ., n, which yield

p_n(t) = ℓ(t) Σ_{j=1}^{n+1} ω_j f(t_j)/(t − t_j), (15)

where

ℓ(t) := Π_{j=1}^{n+1} (t − t_j), ω_j := 1/Π_{k≠j} (t_j − t_k), j = 1, . . ., n + 1.

Furthermore, since Σ_{i=0}^{n} ℓ_i(t) = 1 for all t, the following barycentric formula can be derived for the Lagrange interpolant:

p_n(t) = ( Σ_{j=1}^{n+1} ω_j f(t_j)/(t − t_j) ) / ( Σ_{j=1}^{n+1} ω_j/(t − t_j) ). (16)

Let us observe that any common factor in the values ω_j, j = 1, . . ., n + 1, can be cancelled without changing the value of the interpolant p_n(t). This property is used in [2] to derive interesting properties of the barycentric formula (16) for the interpolant.
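A minimal sketch of the barycentric evaluation (16) follows (Python is used here only for illustration; the weights ω_j are computed directly from their definition, giving an O(n^2) setup and O(n) cost per evaluation):

```python
import numpy as np

def barycentric_weights(t):
    """omega_j = 1 / prod_{k != j} (t_j - t_k)."""
    t = np.asarray(t, dtype=float)
    w = np.ones(len(t))
    for j in range(len(t)):
        for k in range(len(t)):
            if k != j:
                w[j] /= t[j] - t[k]
    return w

def barycentric_eval(t, f, x):
    """Evaluate the Lagrange interpolant at x via the barycentric formula (16)."""
    w = barycentric_weights(t)
    num, den = 0.0, 0.0
    for j in range(len(t)):
        if x == t[j]:          # exact hit on a node
            return f[j]
        c = w[j] / (x - t[j])
        num += c * f[j]
        den += c
    return num / den

# The cubic interpolant of f(t) = t^2 at four nodes reproduces t^2.
nodes = [0.0, 1.0, 2.0, 3.0]
values = [tt ** 2 for tt in nodes]
print(barycentric_eval(nodes, values, 1.5))  # 2.25 up to rounding
```

As noted above, any common factor of the ω_j cancels between numerator and denominator, so the weights may be rescaled freely.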

Accurate computation of divided differences
Instead of the Lagrange formulae (15) and (16), one can use the Newton form of the interpolant, which is obtained when the interpolant is written in terms of the Newton basis (w_0, . . ., w_n) determined by the interpolation nodes t_1, . . ., t_{n+1},

w_0(t) := 1, w_i(t) := Π_{k=1}^{i} (t − t_k), i = 1, . . ., n, (17)

as follows,

p_n(t) = Σ_{i=1}^{n+1} ([t_1, . . ., t_i]f) w_{i-1}(t), (18)

where [t_1, . . ., t_i]f denotes the divided difference of the interpolated function f at the nodes t_1, . . ., t_i. If f is n-times continuously differentiable on [t_1, t_{n+1}], the divided differences [t_1, . . ., t_i]f, i = 1, . . ., n + 1, can be obtained using the following recursion:

[t_i]f = f(t_i), [t_i, . . ., t_{i+k}]f = ([t_{i+1}, . . ., t_{i+k}]f − [t_i, . . ., t_{i+k-1}]f)/(t_{i+k} − t_i). (19)

Note that, once the divided differences have been computed, the interpolant (18) can be evaluated in O(n) flops per evaluation. For a given sequence of values d_1, . . ., d_{n+1}, let us define d_{i,0} := d_i, i = 1, . . ., n + 1, and

d_{i,k+1} := (d_{i+1,k} − d_{i,k})/(t_{i+k+1} − t_i), k = 0, . . ., n − 1, i = 1, . . ., n − k. (21)

The following result shows that the computation of d_{i,k} using the recursion (19) satisfies the NIC condition for ordered sequences of nodes.

Theorem 1. Let t_1, . . ., t_{n+1} be nodes in strictly increasing (respectively, decreasing) order and d_1, . . ., d_{n+1} given values with alternating signs. Then the values d_{i,k}, k = 0, . . ., n, i = 1, . . ., n + 1 − k, can be computed to high relative accuracy.
Proof. We proceed by induction on k. Under the considered hypotheses, the result follows for k = 0. Now, let us assume that the claim holds for k. Then, by (21), we can write d_{i,k+1} = (d_{i+1,k} − d_{i,k})/(t_{i+k+1} − t_i) and, by the induction hypothesis, deduce that the values d_{i+2,k} and d_{i,k} have sign opposite to that of d_{i+1,k}. Now, taking into account that the nodes t_i are in strict order, we derive that the elements d_{i,k+1} and d_{i+1,k+1} have alternating signs. On the other hand, the computation of d_{i,k+1} to high relative accuracy can be guaranteed, since it is computed as a quotient whose numerator is, in effect, a sum of numbers of the same sign, and the subtraction in the denominator involves only initial data and so will not lead to subtractive cancellation.
Note that the previous result provides the conditions on a function f for the computation to high relative accuracy of the coefficients of the Newton form (17) of its Lagrange polynomial interpolant.

Corollary 1. Let t_1, . . ., t_{n+1} be nodes in strictly increasing (respectively, decreasing) order and f a function such that the entries of the vector (f(t_1), . . ., f(t_{n+1})) can be computed to high relative accuracy and have alternating signs. Then the divided differences [t_i, . . ., t_{i+k}]f, k = 0, . . ., n, i = 1, . . ., n + 1 − k, can be computed to high relative accuracy using (19).

Section 6 will illustrate the accuracy obtained using the recursive computation of divided differences and compare the results with those obtained through an alternative method proposed in the next section.
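The recursion (19) is the classical triangular scheme; a plain Python version (for illustration only) is:

```python
def divided_differences(t, f):
    """Coefficients [t_1]f, [t_1,t_2]f, ..., [t_1,...,t_{n+1}]f of the
    Newton form, computed in place by the recursion (19)."""
    n = len(t)
    d = [float(v) for v in f]
    coeffs = [d[0]]
    for k in range(1, n):
        # Sweep bottom-up so each d[i] is overwritten by the next-order
        # divided difference using the nodes t[i-k], ..., t[i].
        for i in range(n - 1, k - 1, -1):
            d[i] = (d[i] - d[i - 1]) / (t[i] - t[i - k])
        coeffs.append(d[k])
    return coeffs

# For f(t) = t^2 at t = 0, 1, 2 the Newton coefficients are 0, 1, 1.
print(divided_differences([0.0, 1.0, 2.0], [0.0, 1.0, 4.0]))  # [0.0, 1.0, 1.0]
```

Under the hypotheses of Corollary 1 (strictly ordered nodes, alternating-sign data), every subtraction in the numerator involves numbers of opposite sign and every subtraction in the denominator involves only initial data, which is precisely what the NIC condition requires.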

Total positivity and bidiagonal factorization of collocation matrices of Newton bases
Let us observe that, since the polynomial m_i(t) = t^i, i = 0, . . ., n, coincides with its interpolant at t_1, . . ., t_{n+1}, taking into account the Newton formula (18) for the monomials m_i, i = 0, . . ., n, we deduce that

(m_0, . . ., m_n) = (w_0, . . ., w_n) U, (22)

where the change of basis matrix U = (u_{i,j})_{1≤i,j≤n+1} is upper triangular and satisfies

u_{i,j} = [t_1, . . ., t_i] m_{j-1}, 1 ≤ i ≤ j ≤ n + 1, u_{i,j} = 0, i > j. (23)

On the other hand, the collocation matrix of the Newton basis (w_0, . . ., w_n) (17) at the interpolation nodes t_1, . . ., t_{n+1} is a lower triangular matrix L = (l_{i,j})_{1≤i,j≤n+1} whose entries are

l_{i,j} = w_{j-1}(t_i) = Π_{k=1}^{j-1} (t_i − t_k), 1 ≤ j ≤ i ≤ n + 1, l_{i,j} = 0, i < j. (24)

Taking into account (14) and (22), we obtain the following Crout factorization of Vandermonde matrices at the nodes t_i, i = 1, . . ., n + 1:

V = L U, (25)

where L is the lower triangular collocation matrix of the Newton basis (w_0, . . ., w_n) and U is the upper triangular change of basis matrix satisfying (22).
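As an illustration, the lower triangular matrix L of (24) can be assembled directly from its entries (a Python sketch, not the paper's Matlab code); solving Ld = f then yields the divided differences, as discussed below:

```python
import numpy as np

def newton_collocation(t):
    """L[i, j] = w_{j-1}(t_i) = prod_{k=1}^{j-1} (t_i - t_k) for i >= j
    (1-based indices, as in (24)); zero above the diagonal."""
    t = np.asarray(t, dtype=float)
    n = len(t)
    L = np.zeros((n, n))
    for i in range(n):
        p = 1.0
        for j in range(i + 1):
            L[i, j] = p
            p *= t[i] - t[j]
    return L

t = [0.0, 1.0, 2.0]
L = newton_collocation(t)
# Solving L d = f gives the Newton coefficients of the interpolant:
d = np.linalg.solve(L, [0.0, 1.0, 4.0])   # f(t) = t^2 at the nodes
print(L)
print(d)  # coincides with the divided differences 0, 1, 1
```

Note that a generic dense solve, as used here, does not exploit the structure of L; the point of the bidiagonal factorization below is precisely to replace it by a structure-preserving solver.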
The following result deduces the bidiagonal factorization of the collocation matrix L of the Newton basis.

Theorem 2. Given interpolation nodes t_1, . . ., t_{n+1}, with t_i ≠ t_j for i ≠ j, let L ∈ R^{(n+1)×(n+1)} be the collocation matrix described by (24) of the Newton basis (17). Then L admits a factorization of the form

L = F_n F_{n-1} · · · F_1 D, (26)
where F_i ∈ R^{(n+1)×(n+1)}, i = 1, . . ., n, are lower triangular bidiagonal matrices whose structure is described by (2), with off-diagonal entries

m_{i,1} = 1, m_{i,j} = Π_{k=1}^{j-1} (t_i − t_{i-k})/(t_{i-1} − t_{i-k-1}), 2 ≤ j < i ≤ n + 1, (27)

and D ∈ R^{(n+1)×(n+1)} is the diagonal matrix D = diag(d_{1,1}, . . ., d_{n+1,n+1}) with

d_{i,i} = Π_{k=1}^{i-1} (t_i − t_k), i = 1, . . ., n + 1. (28)

Proof. From identities (8), (11) and (25), we have V = F_n · · · F_1 D G_1 · · · G_n, where F_i ∈ R^{(n+1)×(n+1)} and G_i ∈ R^{(n+1)×(n+1)}, i = 1, . . ., n, are the lower and upper triangular bidiagonal matrices described by (2), and D is the diagonal matrix whose entries are d_{i,i} = Π_{k=1}^{i-1} (t_i − t_k) for i = 1, . . ., n + 1. From Theorem 3 of [21], the upper triangular matrix U in (23) satisfies U = G_1 · · · G_n, where G_i ∈ R^{(n+1)×(n+1)}, i = 1, . . ., n, are upper triangular bidiagonal matrices whose structure is described by (2). On the other hand, by (25), we also have L = V U^{-1} = F_n · · · F_1 D G_1 · · · G_n (G_1 · · · G_n)^{-1}, and conclude (26).

Taking into account Theorem 2, the bidiagonal factorization of the matrix L can be stored in the matrix BD(L) with

BD(L)_{i,j} = m_{i,j}, i > j, BD(L)_{i,i} = d_{i,i}, BD(L)_{i,j} = 0, i < j. (30)

The analysis of the sign of the entries in (30) will allow us to characterize the total positivity of the collocation matrices of Newton bases in terms of the ordering of the nodes. This is stated in the following result.

Corollary 2. Given interpolation nodes t_1, . . ., t_{n+1}, with t_i ≠ t_j for i ≠ j, let L ∈ R^{(n+1)×(n+1)} be the collocation matrix (24) of the Newton basis (17) and J the diagonal matrix J := diag((−1)^{i-1})_{1≤i≤n+1}.
a) The matrix L is totally positive if and only if t_1 < · · · < t_{n+1}. Moreover, in this case, L and the matrix BD(L) in (30) can be computed to HRA.

b) The matrix L_J := LJ is totally positive if and only if t_1 > · · · > t_{n+1}. Moreover, in this case, the matrix BD(L_J) can be computed to HRA.
Furthermore, the singular values and the inverse matrix of L can be computed, and linear systems Ld = f, where the entries of f = (f_1, . . ., f_{n+1})^T have alternating signs, can be solved, all to HRA.
Proof. a) If t_1 < · · · < t_{n+1}, the entries m_{i,j} in (27) and d_{i,i} in (28) are all positive and we conclude that the diagonal matrix D and the bidiagonal factors F_i, i = 1, . . ., n, are totally positive. Taking into account that the product of totally positive matrices is a totally positive matrix (see Theorem 3.1 of [1]), we can guarantee that L is totally positive. Conversely, if L is totally positive, then the entries m_{i,j} in (27) and d_{i,i} in (28) are all positive and we derive by induction that t_i − t_{i-1} > 0 for i = 2, . . ., n + 1.
On the other hand, for increasing sequences of nodes, the subtractions in the computation of the entries m_{i,j} and d_{i,i} involve only initial data and so will not lead to subtractive cancellations. Therefore, the above-mentioned algebraic problems can be solved to high relative accuracy using the matrix representation (30) and the Matlab commands on Koev's web page (see Section 3 of [6]).
For decreasing nodes, the computation to high relative accuracy of L_J, its singular values, the inverse matrix L_J^{-1} and the solution of L_J c = f, where f = (f_1, . . ., f_{n+1})^T has alternating signs, can be deduced in a similar way to the increasing case. Since J is a unitary matrix, the singular values of L_J coincide with those of L. Similarly, taking into account that L^{-1} = J L_J^{-1}, we can compute L^{-1} accurately. Finally, if we have a linear system of equations Ld = f, where the elements of f = (f_1, . . ., f_{n+1})^T have alternating signs, we can solve the system L_J c = f to high relative accuracy and then obtain d = Jc.
Let us observe that the factorization (26) of the collocation matrix of the Newton basis corresponding to the nodes t_1, . . ., t_{n+1} can be used to solve Lagrange interpolation problems. The Newton form of the Lagrange interpolant can be written as

p_n(t) = Σ_{i=1}^{n+1} d_i w_{i-1}(t), with d_i := [t_1, . . ., t_i]f, i = 1, . . ., n + 1.

The divided differences, which are usually obtained through the recursion (19), can alternatively be obtained by solving the linear system Ld = f, with d := (d_1, . . ., d_{n+1})^T and f := (f_1, . . ., f_{n+1})^T, using the Matlab function TNSolve from Koev's web page [16] and taking the matrix form (3) of the bidiagonal decomposition of L as input argument. Taking into account Corollary 2, BD(L) (for increasing nodes) and BD(L_J) (for decreasing nodes) can be computed to high relative accuracy. Then, if the elements of the vector f are given to high relative accuracy, the computation of the vector d can also be achieved to high relative accuracy.
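For readers without access to Koev's TNSolve, the following sketch solves Ld = f by plain forward substitution on the lower triangular matrix (24). Note that, unlike TNSolve, which works directly on the bidiagonal factors, this generic solve is not guaranteed to attain HRA; it only illustrates that the solution of the system is the vector of divided differences:

```python
import numpy as np

def newton_collocation(t):
    """Lower triangular collocation matrix (24) of the Newton basis."""
    t = np.asarray(t, dtype=float)
    n = len(t)
    L = np.zeros((n, n))
    for i in range(n):
        p = 1.0
        for j in range(i + 1):
            L[i, j] = p
            p *= t[i] - t[j]
    return L

def forward_solve(L, f):
    """Solve L d = f for lower triangular L by forward substitution."""
    n = len(f)
    d = np.zeros(n)
    for i in range(n):
        d[i] = (f[i] - L[i, :i] @ d[:i]) / L[i, i]
    return d

t = [0.0, 1.0, 2.0, 3.0]
f = [v ** 3 for v in t]               # interpolate f(t) = t^3
d = forward_solve(newton_collocation(t), f)
print(d)  # the divided differences [t_1]f, ..., [t_1,...,t_4]f
```

For f(t) = t^3 at these nodes, the computed coefficients are 0, 1, 3, 1, which match the triangular divided-difference scheme.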
Finally, the numerical experimentation reported in Section 6 will compare the vectors of divided differences obtained using the provided bidiagonal factorization, the recurrence (19) and the Matlab command \ for the resolution of linear systems.

Error analysis and perturbation theory
Let us consider the collocation matrix L ∈ R^{(n+1)×(n+1)} of the Newton basis (17) at the nodes t_1 < · · · < t_{n+1} (see (24)). We now present a procedure for the efficient computation of BD(L), that is, the matrix representation of the bidiagonal factorization (1) of L.
Algorithm 1: Computation of the off-diagonal entries m_{i,j}
for i := 2 to n+1
    m_{i,1} := 1
    for j := 2 to i−1
        m_{i,j} := m_{i,j−1} (t_i − t_{i−j+1})/(t_{i−1} − t_{i−j})

Algorithm 2: Computation of the diagonal entries d_{i,i}
d_{1,1} := 1
for i := 2 to n+1
    d_{i,i} := (t_i − t_1) · · · (t_i − t_{i−1})

In the sequel, we analyze the stability of Algorithms 1 and 2 under the influence of imprecise computer arithmetic or perturbed input data. For this purpose, let us first introduce some standard notation in error analysis.
For a given floating-point arithmetic and a real value a ∈ R, the computed element is usually denoted by fl(a) or by â. In order to study the effect of rounding errors, we shall use the well-known models

fl(a op b) = (a op b)(1 + δ_1) = (a op b)/(1 + δ_2), |δ_1|, |δ_2| ≤ u,

where u denotes the unit roundoff and op is any of the elementary operations +, −, ×, / (see [13], p. 40, for more details). Following [13], when performing an error analysis one usually deals with quantities θ_k such that

|θ_k| ≤ γ_k := ku/(1 − ku), (36)

for a given k ∈ N with ku < 1. Taking into account Lemmas 3.3 and 3.4 of [13], the following property of the values (36) holds:

a) θ_k + θ_j + θ_k θ_j = θ_{k+j}.

Statement a) means that, for any two values θ_k and θ_j, bounded by γ_k and γ_j, respectively, there exists a number θ_{k+j}, bounded by γ_{k+j}, such that the above identity holds. Further use of these symbols must be understood in this sense.
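The quantities γ_k can be explored numerically; the sketch below (Python, for illustration only) checks the bound behind property a), namely that γ_k + γ_j + γ_k γ_j ≤ γ_{k+j}:

```python
import numpy as np

u = np.finfo(float).eps / 2   # unit roundoff of IEEE double precision

def gamma(k):
    """gamma_k = k*u / (1 - k*u), defined while k*u < 1 (see (36))."""
    assert k * u < 1
    return k * u / (1 - k * u)

# If |theta_k| <= gamma(k) and |theta_j| <= gamma(j), then the combined
# term theta_k + theta_j + theta_k*theta_j is bounded by gamma(k + j):
k, j = 1000, 2000
worst = gamma(k) + gamma(j) + gamma(k) * gamma(j)
print(worst <= gamma(k + j))  # True
```

This is the mechanism by which relative errors accumulate additively in the counters of Theorem 3 below.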
The following result analyzes the numerical error due to imprecise computer arithmetic in Algorithms 1 and 2, showing that both compute the bidiagonal factorization (1) of L accurately in floating-point arithmetic.

Theorem 3. For n > 1, let L ∈ R^{(n+1)×(n+1)} be the collocation matrix (24) of the Newton basis (17), let BD(L) = (b_{i,j})_{1≤i,j≤n+1} be the matrix form of the bidiagonal decomposition (1) of L and let fl(BD(L)) = (fl(b_{i,j}))_{1≤i,j≤n+1} be the matrix computed with Algorithms 1 and 2 in floating-point arithmetic with machine precision u. Then

|fl(b_{i,j}) − b_{i,j}| ≤ γ_{4n−5} |b_{i,j}|, 1 ≤ i, j ≤ n + 1.

Proof. For i > j, b_{i,j} can be computed using Algorithm 1. Accumulating relative errors as proposed in [13], we can easily derive fl(b_{i,j}) = b_{i,j}(1 + θ_{4n−5}). Analogously, for i = j, b_{i,i} can be computed using Algorithm 2 and we have fl(b_{i,i}) = b_{i,i}(1 + θ_{2n−1}). Finally, since 2n − 1 ≤ 4n − 5 for n > 1, the result follows.

Now, we analyze the effect on the bidiagonal factorization of the collocation matrices of Newton bases of small relative perturbations in the interpolation nodes t_i, i = 1, . . ., n + 1. Let us suppose that the perturbed nodes are t'_i = t_i(1 + δ_i), with |δ_i| ≤ θ, for i = 1, . . ., n + 1. We define the following values, which will allow us to obtain an appropriate structured condition number, in a similar way to other analyses performed in [7,14,22,23,25,26]:

rel_gap_t := min_{i≠j} |t_i − t_j| / (|t_i| + |t_j|), κ := 1/rel_gap_t,

where rel_gap_t ≫ θ.

Theorem 4. Let L and L' be the collocation matrices (24) of the Newton basis (17) at the nodes t_i and t'_i, i = 1, . . ., n + 1, respectively, with t'_i = t_i(1 + δ_i), for i = 1, . . ., n + 1, and |δ_i| ≤ θ. Let BD(L) = (b_{i,j})_{1≤i,j≤n+1} and BD(L') = (b'_{i,j})_{1≤i,j≤n+1} be the matrix forms of the bidiagonal factorizations of L and L', respectively. Then

|b'_{i,j} − b_{i,j}| ≤ ((2n − 2)κθ + O(θ^2)) |b_{i,j}|. (41)

Proof. First, let us observe that, for any pair of nodes, |(t'_i − t'_j) − (t_i − t_j)| ≤ θ(|t_i| + |t_j|) ≤ κθ |t_i − t_j|. Accumulating the perturbations in the style of Higham (see Chapter 3 of [13]), we derive the bound (42) for the off-diagonal entries m'_{i,j}. Analogously, for the diagonal entries d'_{i,i} we obtain (43). Finally, using (42) and (43), the result follows.
Formula (41) can also be obtained from Theorem 7.3 of [25], where it is shown that small relative perturbations in the nodes of a Cauchy-Vandermonde matrix produce only small relative perturbations in its bidiagonal factorization. Let us note that the entries m_{i,j} and d_{i,i} of the bidiagonal decomposition of collocation matrices of Newton bases at distinct nodes coincide with those of Cauchy-Vandermonde matrices with l = 0. Finally, let us note that the quantity (2n − 2)κθ can be seen as an appropriate structured condition number for the considered collocation matrices of the Newton bases (17).

Numerical experiments
In order to give numerical support to the theoretical methods discussed in the previous sections, we provide a series of numerical experiments. In all cases, we have considered ill-conditioned nonsingular collocation matrices L of (n + 1)-dimensional Newton bases (17) with equidistant increasing or decreasing nodes.

Table 1
Relative errors of the approximations to the solution of the linear system Ld = f, for equidistant nodes in a unit-length interval in increasing and decreasing order.

Computation of the coefficients of the Newton form for a function of constant sign. As explained in the previous case, when determining the coefficients of the Newton form of the interpolating polynomial, the vector (f_1, . . ., f_{n+1}) is required to have alternating signs in order to guarantee high relative accuracy. However, other sign structures can be addressed by the methods discussed in this work, although HRA cannot be assured in these cases. To illustrate the behavior of both the divided-difference recurrence and the bidiagonal decomposition approach in such a scenario, numerical experiments for the classical Runge function 1/(1 + 25x^2) with equidistant nodes in the interval [−2, 2] were performed. Relative errors of the computed coefficients of the Newton form are gathered for several n in Table 2, comparing the performance of the divided-difference recurrence (19), the function TNSolve applied to the bidiagonal factorization proposed in Section 4, and the standard Matlab \ command. As can be seen, good accuracy is achieved for small enough values of n, while relative errors for increasing n behave considerably worse with the standard Matlab \ command than with the divided-difference recurrence and the bidiagonal decomposition approaches, which achieve similar precision.

Computation of singular values of L to HRA. In the second numerical experiment, the precision achieved by two methods to compute the lowest singular value of L is analyzed. On the one hand, we computed the lowest
singular value of L by means of the routine TNSingularValues, using BD(L) for increasing and BD(L_J) for decreasing order. Notice that, for nodes in decreasing order, the singular values of L_J coincide with those of L since J is unitary. On the other hand, the lowest singular value of L was computed with the Matlab command svd.
As expected, when the bidiagonal decomposition approach is used, the singular values are computed to high relative accuracy for every dimension n, despite the ill-conditioning of the matrices and in contrast with the standard Matlab procedure.

Computation of the inverse of L to HRA. Finally, to show another application of the bidiagonal decompositions presented in this work, the inverses of the considered collocation matrices were computed. We used two different procedures: the Matlab inv command and the function TNInverseExpand with our bidiagonal decompositions BD(L) for increasing and BD(L_J) for decreasing order. To compute the inverse to HRA in the decreasing-order case, note that first the inverse of L_J is obtained and then L^{-1} is recovered as L^{-1} = J L_J^{-1}. It should be pointed out that formula (13) in [3] allows one to compute L^{-1} to high relative accuracy for any ordering of the nodes. Displayed in Table 4, the results of these numerical experiments show that our bidiagonal decomposition approach preserves high relative accuracy for all the values of n tested, in stark contrast with the standard Matlab routine.

Table 2
Relative errors of the approximations to the solution of the linear system Ld = f, for the classical Runge function (1 + 25x^2)^{-1}, with equidistant nodes in increasing order in [−2, 2].

Table 3
Relative errors when computing the lowest singular value of L for equidistant nodes in an interval of unit length, in increasing (t_1 < · · · < t_{n+1}) and decreasing (t_1 > · · · > t_{n+1}) order.

Table 4
Relative errors when computing the inverse of collocation matrices of the Newton basis with equidistant nodes in a unit-length interval in increasing and decreasing order.