Abstract
The barycentric forms of polynomial and rational interpolation have recently gained popularity, because they can be computed with simple, efficient, and numerically stable algorithms. In this paper, we show more generally that the evaluation of any function that can be expressed as \(r(x)=\sum _{i=0}^n a_i(x) f_i\big /\sum _{j=0}^m b_j(x)\) in terms of data values \(f_i\) and some functions \(a_i\) and \(b_j\) for \(i=0,\ldots ,n\) and \(j=0,\dots ,m\) with a simple algorithm that first sums up the terms in the numerator and the denominator, followed by a final division, is forward and backward stable under certain assumptions. This result includes the two barycentric forms of rational interpolation as special cases. Our analysis further reveals that the stability of the second barycentric form depends on the Lebesgue constant associated with the interpolation nodes, which typically grows with \(n\), whereas the stability of the first barycentric form depends on a similar but different quantity that can be bounded in terms of the mesh ratio, regardless of \(n\). We support our theoretical results with numerical experiments.
Introduction
Given \(n+1\) distinct interpolation nodes \(x_0,\ldots ,x_n\) with associated data \(f_0,\ldots ,f_n\), the classical interpolation problem consists in finding a function \(r:\mathbb {R}\rightarrow \mathbb {R}\) that interpolates \(f_i\) at \(x_i\), that is,
In the case of polynomial interpolation of degree at most n, this problem has a unique solution, which can be expressed in Lagrange form as
While this form is advantageous for theoretical analysis, its evaluation requires \(O(n^2)\) operations and can be numerically unstable. It is advisable to consider instead the first barycentric form of r,
with the Lagrange weights \(w_i\) defined as
The first barycentric form is more efficient than the Lagrange form, as it can be evaluated in O(n) operations, after computing the \(w_i\), which are independent of x, in \(O(n^2)\) operations in a preprocessing step. Higham [1] further shows that this evaluation is backward stable with respect to perturbations of the data \(f_i\). Another means of evaluating r is given by the second barycentric form of r,
which can be derived from (2) by noticing that \(1=\ell (x)\sum _{i=0}^n\frac{w_i}{x-x_i}\). Evaluating this formula also requires O(n) operations, but it comes with the advantage that the \(w_i\) can be rescaled by a common factor to avoid underflow and overflow [2]. Moreover, the second barycentric form is forward stable, as long as the Lebesgue constant associated with the interpolation nodes \(x_i\) is small [1], which is the case, for example, for Chebyshev nodes of the first and the second kind, but not for equidistant nodes.
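As an illustration of this O(n) evaluation, here is a minimal Python sketch of the second barycentric form (4) (the paper's experiments use C++; the function and variable names here are ours, and an exact hit \(x=x_i\) is handled by returning \(f_i\) directly):

```python
def bary2(x, nodes, f, w):
    """Evaluate the second barycentric form (4) at x in O(n) operations."""
    num = 0.0
    den = 0.0
    for xi, fi, wi in zip(nodes, f, w):
        if x == xi:            # x coincides with a node: return the data value
            return fi
        t = wi / (x - xi)      # common term of numerator and denominator
        num += t * fi
        den += t
    return num / den           # final division
```

Rescaling all \(w_i\) by a common factor cancels in the quotient, which is precisely the underflow/overflow safeguard mentioned above.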
In the case of rational interpolation, the interpolation problem (1) no longer has a unique solution, but Berrut and Mittelmann [3] show that every rational interpolant of degree at most n can be expressed in the second barycentric form (4) for a specific choice of weights \(w_i\). Vice versa, Schneider and Werner [4] note that for any set of nonzero weights \(w_i\), the function r in (4) is a rational interpolant of degree at most n. An important subset of these barycentric rational interpolants are those that do not have any poles in \(\mathbb {R}\). This is obviously true for the Lagrange weights in (3), but also for the Berrut weights [5]
and, assuming the interpolation nodes to be in ascending order, that is, \(x_0<\dots <x_n\), for the family of weights with parameter \(d\in \{0,\ldots ,n\}\),
proposed by Floater and Hormann [6]. Note that this family includes the Berrut and the Lagrange weights as special cases for \(d=0\) and \(d=n\). For the weights in (6), Floater and Hormann further observe that the barycentric rational interpolant in (4) can also be written as
where
As the formula in (7) simplifies to (2) for \(d=n\), we refer to it as the first barycentric form of the corresponding rational interpolant. Note that the first and the second form are identical for Berrut’s interpolant, that is, for \(d=0\).
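For concreteness, the weights in (6) can be transcribed directly from their defining formula; the following Python sketch (our own illustration, using the standard index set \(J_i=\{k:\max (0,i-d)\le k\le \min (i,n-d)\}\) from [6]) computes them in \(O(nd^2)\) operations:

```python
def fh_weights(nodes, d):
    """Floater-Hormann weights (6):
    w_i = sum_{k in J_i} (-1)^k * prod_{j=k,...,k+d, j != i} 1/(x_i - x_j)."""
    n = len(nodes) - 1
    w = []
    for i in range(n + 1):
        s = 0.0
        for k in range(max(0, i - d), min(i, n - d) + 1):
            p = 1.0
            for j in range(k, k + d + 1):
                if j != i:
                    p *= 1.0 / (nodes[i] - nodes[j])
            s += (-1.0) ** k * p
        w.append(s)
    return w
```

For \(d=n\) this reduces to the Lagrange weights (3), and for equidistant nodes the magnitudes follow an integer pattern up to a common factor (e.g. 1, 2, 2, ..., 2, 1 for \(d=1\)).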
The result of Higham [1] can be extended to show that the evaluation of the second barycentric form (4) is forward stable, not only in the case of polynomial interpolation, but for general barycentric rational interpolants, provided that the weights \(w_i\) can be computed with a forward stable algorithm and that the corresponding Lebesgue constant is small [7]. For Berrut’s rational interpolant with weights in (5), this is the case for all well-spaced interpolation nodes [8], including equidistant and Chebyshev nodes. For the family of barycentric rational interpolants with weights in (6), the Lebesgue constant is known to grow logarithmically in n, for any constant \(d>0\) and equidistant [9] as well as quasi-equidistant [10] nodes, and the formula in (6) turns out to be a forward stable means of computing the weights \(w_i\) [11].
In this paper, we further generalize the proof of Salazar Celis [7], such that it can also be used for proving the forward stability of the first barycentric form (7), with two important changes (Sect. 3). On the one hand, the result relies on the fact that not only the \(w_i\), but also the \(\lambda _i\) can be evaluated with a forward stable algorithm. This, however, is indeed the case, both for the formula in (8), which requires \(O(d(n-d))\) operations for \(0<d<n\), as well as for a more efficient, but slightly less stable formula that gets by with O(n) operations (Sect. 4). On the other hand, the Lebesgue constant must be replaced by a similar quantity that depends on the functions \(\lambda _i\), for which we prove that it is at most on the order of \(O(\mu ^d)\), where \(\mu \) is the mesh ratio of the interpolation nodes (Sect. 6). Moreover, we show that a more efficient formula [12] for computing the weights in (6) is forward stable, too (Sect. 4).
Regarding backward stability, Mascarenhas and Camargo [13] show that the second barycentric form is backward stable under the same assumptions, namely forward stable weights \(w_i\) and small Lebesgue constant. Moreover, Camargo [11] proves that the first barycentric form is backward stable, as long as the denominator in (7) is computed with a special algorithm in O(nd) operations.
In this paper, we derive a substantially different approach that can be used to show the backward stability of both barycentric forms (Sect. 5). For the second barycentric form, our result provides an upper bound on the perturbation of the data \(f_i\) that is smaller than the upper bound by Mascarenhas and Camargo [13]. For the first barycentric form, our upper bound is larger than the one found by Camargo [11], but it comes with the advantage of holding for a more efficient way of computing the denominator in (7) in O(n) operations, which is based on our new O(n) algorithm for evaluating the \(\lambda _i\) (Sect. 4).
It is important to note that our results hold under the assumption that the input values to the algorithm, \(x_i\), \(f_i\), and x, are given as floating point numbers, so they do not introduce any additional error when we compute both forms (4) and (7). Our stability analysis does not cover errors that result from initially rounding the given values to floating point numbers.
Our numerical experiments (Sect. 7) confirm that this leads to an evaluation of the first barycentric form (7) of the rational interpolant with weights in (6), which is considerably faster than the algorithm proposed by Camargo [11], especially for larger d, at the price of only marginally larger forward errors. Evaluating the interpolant using the second barycentric form (4) is even faster and can be as stable, but may also result in significantly larger errors for certain choices of interpolation nodes. However, we also report a case in which the second form is stable and the first is not.
Preliminaries
Following Trefethen [14], a general problem g can be seen as a function \(g:U \rightarrow V\) from a normed vector space \((U,{\Vert \cdot \Vert }_U)\) of data to a normed vector space \((V,{\Vert \cdot \Vert }_V)\) of solutions and a numerical algorithm for solving the problem on a computer as another function \(\hat{g}:U\rightarrow V\) that approximates g.
For our analysis, we consider a computer that uses a set \(\mathbb {F}\) of floating point numbers with the corresponding machine epsilon \(\epsilon \) and let \({\textrm{fl}}:\mathbb {R}\rightarrow \mathbb {F}\) be the rounding function that maps each \(x\in \mathbb {R}\) to the closest floating point approximation \({\textrm{fl}}(x)\in \mathbb {F}\). Moreover, we denote by \(\circledast \) the floating point analogue of the arithmetic operation \(*\in \{+,-,\times ,\div \}\), that is,
for \(x,y\in \mathbb {F}\), and recall [14, Lecture 13] that \({\textrm{fl}}\) and \(\circledast \) introduce a relative error of size at most \(\epsilon \). That is, for all \(x\in \mathbb {R}\) there exists some \(\delta \in \mathbb {R}\) with \(|\delta |<\epsilon \), such that
and likewise
for all \(x,y\in \mathbb {F}\), which follows immediately from (9). We further note that \({\textrm{fl}}(x)=x\) for all \(x\in \mathbb {F}\), so that multiplying a floating point number by \({(-1)}^i\) or taking its absolute value does not entail any rounding error.
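These properties of the model are easy to observe directly; the following short Python check (double precision, where the machine epsilon is \(2^{-52}\)) illustrates the exactness of \({\textrm{fl}}\) on \(\mathbb {F}\) and the absorption of perturbations below the rounding threshold:

```python
import sys

eps = sys.float_info.epsilon   # machine epsilon of IEEE double precision
assert eps == 2.0 ** -52

# fl(x) = x for x already in F: negation and absolute value are exact
x = 0.1
assert abs(-x) == x and (-1) ** 3 * x == -x

# a perturbation below half an ulp of 1 is rounded away: fl(1 + eps/4) = 1
assert 1.0 + eps / 4 == 1.0

# whereas 1 + eps is the next representable number after 1
assert 1.0 + eps > 1.0
```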
In order to analyse and to compare the quality of numerical algorithms, the usual quantities to consider are the absolute forward error \({\Vert \hat{g}(u)-g(u)\Vert }_V\) and the relative forward error \({\Vert \hat{g}(u)-g(u)\Vert }_V/{\Vert g(u)\Vert }_V\). The algorithm \(\hat{g}\) is called accurate or forward stable, if
for all \(u\in U\) with \(g(u)\ne 0\) and backward stable at \(u\ne 0\), if
where the notation \(x=O(\epsilon )\) means that there exists some positive constant C, such that \(|x|\le C\epsilon \) as \(\epsilon \rightarrow 0\). In the case of forward stability, this constant usually depends on the relative condition number
of the problem g at u and on the dimensions of U and V.
In this paper, we analyse algorithms for evaluating barycentric rational interpolants, based on the formulas (4) and (7). Following Higham [1], we assume the evaluation point \(x\in \mathbb {F}\) and the interpolation nodes \({\varvec{x}}=(x_0,\dots ,x_n)\in \mathbb {F}^{n+1}\) to be fixed and prove the forward and backward stability with respect to the data \({\varvec{f}}=(f_0,\dots ,f_n)\in \mathbb {F}^{n+1}\). More precisely, we solve the problem
by a numerical algorithm
and we show that for any choice of x, \({\varvec{x}}\), and \({\varvec{f}}\), the relative forward error is bounded from above (Sect. 3),
and that there exists some perturbed data \(\varvec{\hat{f}}\) with
such that \(\hat{r}(x;{\varvec{x}},{\varvec{f}})=r(x;{\varvec{x}},\varvec{\hat{f}})\), where the additional arguments of r are used to emphasize the dependence of the interpolant on the interpolation nodes and the data (Sect. 5). In both cases, \(\epsilon \) is assumed to be sufficiently small, since this is enough to guarantee that the errors are \(O(\epsilon )\). We will see that both constants \(C_1\) and \(C_2\) depend linearly on n and a quantity \(\beta (x)\), which is independent of \({\varvec{f}}\) and different for the first and the second barycentric form. It turns out that this quantity is usually smaller when the first barycentric form is used to evaluate r (Sect. 7). As mentioned above, the constant \(C_1\) further depends on the condition number of the problem, and more specifically on the componentwise relative condition number [15]
For the derivation of these upper bounds, we shall frequently use the following basic facts:

1.
By Taylor expansion,
$$\begin{aligned} \frac{1}{1+y} = \sum _{k=0}^\infty {(-1)}^k y^k \end{aligned}$$for any \(y\in \mathbb {R}\) with \(|y|<1\). Consequently, if \(y=O(\epsilon )\), then
$$\begin{aligned} \frac{1}{1+y} = 1 - y + O(\epsilon ^2). \end{aligned}$$(12)Moreover, for any \(\delta \in \mathbb {R}\) with \(|\delta | \le \epsilon \), there exists some \(\delta '\in \mathbb {R}\) with \(|\delta '|\le \epsilon +O(\epsilon ^2)\), such that
$$\begin{aligned} \frac{1}{1+\delta } = 1 + \delta '. \end{aligned}$$(13)This observation is useful for “moving” the perturbation (10) caused by a floating point operation from the denominator to the numerator.

2.
For any \(\delta _1,\dots ,\delta _m\in \mathbb {R}\) with \(|\delta _i|\le C_i\epsilon \) for some \(C_i>0\), \(i=1,\dots ,m\), there exists some \(\delta \in \mathbb {R}\) with \(|\delta |\le C\epsilon +O(\epsilon ^2)\), where \(C=\sum _{i=1}^m C_i\), such that
$$\begin{aligned} \prod \limits _{j=1}^{m}(1+\delta _j) = 1 + \delta . \end{aligned}$$(14)We use this observation to gather the perturbations caused by computing the product of m terms into a single perturbation.

3.
For any \(t_0,\dots ,t_m\in \mathbb {F}\), there exist some \(\varphi _0,\dots ,\varphi _m\in \mathbb {R}\) with \(|\varphi _0|,\dots ,|\varphi _m|\le m\epsilon +O(\epsilon ^2)\), such that
$$\begin{aligned} {\textrm{fl}}\left( \sum _{i=0}^m t_i \right) = ( \cdots (( t_0 \oplus t_1) \oplus t_2) \cdots \oplus t_m) = \sum _{i=0}^m t_i (1+\varphi _i). \end{aligned}$$(15)This follows from the previous observation, and we use it to estimate the rounding error introduced by simple iterative summation of \(m+1\) floating point numbers.
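This estimate can be verified experimentally; the following Python sketch (our own check, with `math.fsum` serving as a correctly rounded reference) confirms that the error of simple left-to-right summation stays below the first-order bound \(m\epsilon \sum _{i=0}^m |t_i|\) implied by (15):

```python
import math
import random
import sys

eps = sys.float_info.epsilon
random.seed(42)

m = 1000
t = [random.uniform(-1.0, 1.0) for _ in range(m + 1)]

naive = 0.0
for ti in t:                  # iterative summation as in (15)
    naive += ti

exact = math.fsum(t)          # correctly rounded sum of the same terms
bound = m * eps * sum(abs(ti) for ti in t)   # first-order bound from (15)
assert abs(naive - exact) <= bound
```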
Forward stability
For analysing the relative forward error of barycentric rational interpolation, we first observe that (4) and (7) can both be written in the common form
where \(a_i(x)=w_i/(x-x_i)\) for both forms, while \(m=n\) and \(b_j(x)=a_j(x)\) for the second form and \(m=n-d\) and \(b_j(x)=\lambda _j(x)\) for the first form. Next, we define the functions
and
Assuming now that we have forward stable algorithms for computing \(a_i(x)\) as \(\hat{a}_i(x)\) and \(b_j(x)\) as \(\hat{b}_j(x)\), we can derive a general bound on the relative forward error for the function r in (16).
Theorem 1
Suppose that there exist \(\alpha _0,\dots ,\alpha _n\in \mathbb {R}\) with
and \(\beta _0,\dots ,\beta _m\in \mathbb {R}\) with
for some constants A and B. Then, assuming \({\varvec{f}}\in \mathbb {F}^{n+1}\), the relative forward error of r in (16) satisfies
for \(\epsilon \) small enough.
Proof
We first notice that \(\hat{r}\) is given by
where \(\delta ^{\times }_i\), \(\varphi ^N_i\), \(\varphi ^D_j\) and \(\delta ^{\div }\) are the relative errors introduced respectively by the product \(\hat{a}_i(x)f_i\), the sums in the numerator and the denominator, and the final division. It then follows from (10) that \(|\delta ^{\times }_i|,|\delta ^{\div }| \le \epsilon \), while from (15) we have \(|\varphi ^N_i|\le n\epsilon +O(\epsilon ^2)\) and \(|\varphi ^D_j|\le m\epsilon +O(\epsilon ^2)\). By (14), there exist some \(\eta _i,\mu _j\in \mathbb {R}\) with
such that
Therefore,
Since, by the triangle inequality,
and
we can use (12) to express this ratio as
and obtain the relative forward error of r as
The upper bound in (19) then follows immediately by using again (22) and (23). \(\square \)
While Theorem 1 holds for any function r that can be expressed as in (16), we shall now focus on the special cases for the two different forms of the barycentric rational interpolant. For the second barycentric form, the only assumption we need is that the weights \(w_i\) can be computed as \(\hat{w}_i\) with a forward stable algorithm. Moreover, we recall the definition of the Lebesgue function of the barycentric rational interpolant [9] as
Corollary 1
Assume that there exist \(\psi _0,\dots ,\psi _n\in \mathbb {R}\) with
for some constant W. Then, assuming \({\varvec{f}}\in \mathbb {F}^{n+1}\), the relative forward error of the second barycentric form in (4) satisfies
for \(\epsilon \) small enough.
Proof
We first notice that \(a_i(x)=w_i/(x-x_i)\) can be computed with one subtraction and one division, so that, by (10), (13), (14), and (24),
for some \(\alpha _i\in \mathbb {R}\) with \(|\alpha _i|\le (2+W)\epsilon +O(\epsilon ^2)\). Hence, the constants in (19) are \(A=2+W\) and further \(B=2+W\), because \(b_j(x)=a_j(x)\) in case of the second barycentric form. From the latter, it also follows immediately that \(\beta (x)\) in (18) is equal to \(\Lambda _n(x;{\varvec{x}})\) in this case, and it only remains to show that \(\alpha (x;{\varvec{f}})\) in (17) is equal to \(\kappa (x;{\varvec{x}},{\varvec{f}})\) in (11). To this end, we first use the triangle inequality to see that for any \(\delta >0\) and any \({\varvec{h}}=(h_0,\dots ,h_n)\in \mathbb {R}^{n+1}\setminus \{{\varvec{0}}\}\) with \(|h_i|\le \delta |f_i|\),
where equality is attained for \({\varvec{h}}\) with \(h_i=\delta {\textrm{sign}}(a_i(x))|f_i|\), \(i=0,\dots ,n\). Dividing both sides of this inequality by \(\delta \) and \(\bigl |\sum _{i=0}^n a_i(x)f_i\bigr |\), taking the supremum over all admissible \({\varvec{h}}\) and the limit \(\delta \rightarrow 0\), gives \(\kappa (x;{\varvec{x}},{\varvec{f}})=\alpha (x;{\varvec{f}})\). \(\square \)
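The role of the Lebesgue function can be made tangible numerically. The following Python sketch (our own illustration) evaluates \(\Lambda _n(x)=\sum _{i=0}^n|w_i/(x-x_i)|\big /\bigl |\sum _{i=0}^n w_i/(x-x_i)\bigr |\) for the Lagrange weights (3) and compares its sampled maximum for equidistant and Chebyshev nodes:

```python
import math

def lagrange_weights(nodes):
    """Lagrange weights (3): w_i = 1 / prod_{j != i} (x_i - x_j)."""
    w = []
    for i, xi in enumerate(nodes):
        p = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                p *= xi - xj
        w.append(1.0 / p)
    return w

def lebesgue(x, nodes, w):
    """Lebesgue function: sum |w_i/(x-x_i)| over |sum w_i/(x-x_i)|."""
    terms = [wi / (x - xi) for wi, xi in zip(w, nodes)]
    return sum(abs(t) for t in terms) / abs(sum(terms))

def sampled_max(nodes):
    """Maximum of the Lebesgue function over a fine grid in (-1, 1)."""
    w = lagrange_weights(nodes)
    best = 0.0
    for k in range(1000):
        x = -0.999 + 1.998 * k / 999
        if min(abs(x - xi) for xi in nodes) > 1e-9:   # stay away from nodes
            best = max(best, lebesgue(x, nodes, w))
    return best

n = 20
equi = [2 * i / n - 1 for i in range(n + 1)]
cheb = [math.cos((2 * i + 1) * math.pi / (2 * n + 2)) for i in range(n + 1)]
```

Already for n = 20, the sampled maximum for equidistant nodes exceeds the Chebyshev one by orders of magnitude, in line with the discussion after (4).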
For the first barycentric form, we additionally need to assume that the \(\lambda _i(x)\) can be computed with a forward stable algorithm as \(\hat{\lambda }_i(x)\), and we note that \(\beta (x)\) in (18) is equal to
in this case.
Corollary 2
Assume that the weights \(w_0,\dots ,w_n\) can be computed as in (24) and that there exist \(\gamma _0,\dots ,\gamma _{n-d}\in \mathbb {R}\) with
for some constant C. Then, assuming \({\varvec{f}}\in \mathbb {F}^{n+1}\), the relative forward error of the first barycentric form in (7) satisfies
for \(\epsilon \) small enough.
Proof
As the numerators of the first and the second barycentric form are identical, the only difference to the proof of Corollary 1 is that \(B=C\) in (19), because \(b_j(x)=\lambda _j(x)\) and \(m=n-d\). \(\square \)
While upper bounds for the Lebesgue function \(\Lambda _n(x;{\varvec{x}})\) can be found in the literature [8, 9], we are unaware of any previous work bounding the function \(\Gamma _d(x;{\varvec{x}})\), and we derive such an upper bound in Sect. 6.
Computing the weights \(w_i\) and evaluating the functions \(\lambda _i\)
It remains to work out the constants W and C, related to the computation of the weights \(w_i\) in (4) and the evaluation of the functions \(\lambda _i\) in (7). In particular, we study the error propagation that occurs in the implementation of different algorithms and further analyse them in terms of computational cost.
Regarding the \(w_i\), it was shown by Higham [1] that the Lagrange weights in (3) can be computed stably with \(W=2n\) in (24), and the Berrut weights in (5) can be represented exactly in \(\mathbb {F}\), so that \(W=0\). The same holds for the weights in (6) if the interpolation nodes are equidistant, because they simplify to the integers
in this special case [6]. For the general case, Camargo [11, Lemma 1] shows that \(W=3d\), if the \(w_i\) are computed with a straightforward implementation of the formula in (6). While this construction requires \(O(nd^2)\) operations, Hormann and Schaefer [12] suggest an improved O(nd) pyramid algorithm, which turns out to have the same precision. Their algorithm starts from the values
and iteratively computes
for \(l=d-1,d-2,\dots ,0\), tacitly assuming \(v^l_i=0\) for \(i<0\) and \(i>n-l\). They show that the resulting values \(v^0_i\) are essentially the weights \(w_i\) in (6), up to a factor of \({(-1)}^{i-d}\).
Lemma 1
For any \(x_0,\dots ,x_n\in \mathbb {F}\), there exist \(\phi ^0_0,\dots ,\phi ^0_n\in \mathbb {R}\) with \(|\phi ^0_0|,\dots ,|\phi ^0_n|\le W\epsilon +O(\epsilon ^2)\) for \(W=3d\), such that the \(v^0_i\) in (30) satisfy
Proof
The statement is a special case of the more general observation that there exists for any \(l=d,d-1,\dots ,0\) and \(i=0,\dots ,n-l\) some \(\phi ^l_i\in \mathbb {R}\) with \(|\phi ^l_i|\le 3(d-l)\epsilon +O(\epsilon ^2)\), such that \(\hat{v}^l_i=v^l_i(1+\phi ^l_i)\), which can be shown by induction over l. The base case \(l=d\) follows trivially from (29). For the inductive step from \(l+1\) to l, we conclude from (10), (13), and (14), that \(\hat{v}^l_i\), computed with the formula in (30), satisfies
for some \(\rho _1,\rho _2\in \mathbb {R}\) with \(|\rho _1|,|\rho _2|\le 3\epsilon +O(\epsilon ^2)\), since both terms are affected by one subtraction, one division, and one sum. By induction hypothesis and (14), we can then assume the existence of some \(\sigma _1,\sigma _2\in \mathbb {R}\) with \(|\sigma _1|,|\sigma _2|\le 3(d-l)\epsilon +O(\epsilon ^2)\), such that
and the intermediate value theorem further guarantees that
for some \(\phi ^l_i\in [\min (\sigma _1,\sigma _2),\max (\sigma _1,\sigma _2)]\) with \(|\phi ^l_i|\le 3(d-l)\epsilon +O(\epsilon ^2)\). \(\square \)
Let us now focus on the functions \(\lambda _i\) that appear in the barycentric formula (7) and first study the error propagation when computing them straightforwardly, following the formula in (8).
Lemma 2
For any \(x\in \mathbb {F}\) and \(x_0,\dots ,x_n\in \mathbb {F}\), there exist \(\theta _0,\dots ,\theta _{n-d}\in \mathbb {R}\) with \(|\theta _0|,\dots ,|\theta _{n-d}|\le C\epsilon +O(\epsilon ^2)\) for \(C=2d+2\), such that the \(\lambda _i(x)\) in (8) satisfy
Proof
Since computing \(\hat{\lambda }_i(x)\) requires \(d+1\) subtractions, d products, and one division, the result follows directly from (10), (14), and (13). \(\square \)
Evaluating \(\lambda _i\) in this way clearly has a computational cost of O(d) for \(d>0\) and O(1) for \(d=0\), so that computing all \(\lambda _i(x)\) requires \(O(d(n-d))\) operations for \(0<d<n\) and O(n) operations for \(d=0\) and \(d=n\). However, for \(0<d<n\) this can be improved by exploiting the fact that \(\lambda _i(x)\) and \(\lambda _{i+1}(x)\) have d common factors in the denominator, which in turn suggests first computing the “central” \(\lambda _m(x)\) for \(m=\left\lfloor \frac{n-d}{2} \right\rfloor \) as above and then the remaining \(\lambda _i(x)\) iteratively as
Computing all \(\lambda _i(x)\) this way requires only O(n) operations, but it comes at the price of a likely reduced precision.
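Recall from [6] that \(\lambda _i(x)={(-1)}^i\big /\prod _{j=i}^{i+d}(x-x_j)\), so cancelling the d common factors gives the step \(\lambda _{i+1}(x)=-\lambda _i(x)(x-x_i)/(x-x_{i+d+1})\). The following Python sketch implements both variants (the recurrence is our reading of (31); each step costs two subtractions, one division, and one product, matching the proof of Lemma 3 below):

```python
def lam_direct(x, nodes, d, i):
    """lambda_i(x) via (8): (-1)^i / prod_{j=i}^{i+d} (x - x_j)."""
    p = 1.0
    for j in range(i, i + d + 1):
        p *= x - nodes[j]
    return (-1.0) ** i / p

def lam_iterative(x, nodes, d):
    """All lambda_i(x) in O(n): compute the central one directly, then sweep
    outwards with lambda_{i+1}(x) = -lambda_i(x) (x - x_i)/(x - x_{i+d+1})
    and the analogous recurrence to the left."""
    n = len(nodes) - 1
    m = (n - d) // 2
    lam = [0.0] * (n - d + 1)
    lam[m] = lam_direct(x, nodes, d, m)
    for i in range(m, n - d):            # sweep to the right of the centre
        lam[i + 1] = -lam[i] * (x - nodes[i]) / (x - nodes[i + d + 1])
    for i in range(m, 0, -1):            # sweep to the left of the centre
        lam[i - 1] = -lam[i] * (x - nodes[i + d]) / (x - nodes[i - 1])
    return lam
```

Both variants agree up to rounding; the iterative one accumulates the slightly larger perturbations quantified in Lemma 3.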
Lemma 3
For any \(x\in \mathbb {F}\) and \(x_0,\dots ,x_n\in \mathbb {F}\), there exist \(\zeta _0,\dots ,\zeta _{n-d}\in \mathbb {R}\) with \(|\zeta _0|,\dots ,|\zeta _{n-d}|\le C\epsilon +O(\epsilon ^2)\) for \(C=2n+4\), such that the \(\lambda _i(x)\) in (31) satisfy
Proof
By Lemma 2, we know that \(\hat{\lambda }_m(x)=\lambda _m(x)(1+\zeta _m)\) with \(|\zeta _m|\le (2d+2)\epsilon +O(\epsilon ^2)\). Each step in (31) involves two subtractions, one division, and one product, and therefore introduces a perturbation of \((1+\delta )\) with \(|\delta |\le 4\epsilon +O(\epsilon ^2)\), and these perturbations accumulate during the iteration. Since the number of steps is at most \((n-d+1)/2\), the overall perturbation for each \(\lambda _i(x)\) is at most \((1+\zeta _i)\) with \(|\zeta _i|\le [(2d+2)+4(n-d+1)/2]\epsilon +O(\epsilon ^2)\). \(\square \)
Backward stability
Similar to how we established the forward stability of both barycentric forms in a unified way in Sect. 3, we can prove the backward stability in general for the function r in (16) and then derive upper bounds on the perturbation of the data for both forms as special cases.
Theorem 2
Suppose that there exist \(\alpha _0,\dots ,\alpha _n\in \mathbb {R}\) with
and \(\beta _0,\dots ,\beta _m\in \mathbb {R}\) with
for some constants A and B. Then there exists for any \({\varvec{f}}\in \mathbb {F}^{n+1}\) some \(\varvec{\hat{f}}\in \mathbb {F}^{n+1}\) with
for \(\epsilon \) small enough, such that the numerical evaluation of r in (16) satisfies \(\hat{r}(x;{\varvec{f}})=r(x;\varvec{\hat{f}})\).
Proof
Starting from (21), with the \(\eta _i\) and \(\mu _j\) satisfying (20), we get, again with the help of (12),
where
By (23), this means that there exist some \(\xi _0,\dots ,\xi _n\in \mathbb {R}\) with \(|\xi _0|,\dots ,|\xi _n|\le (m+B)\max _x\beta (x)\epsilon +O(\epsilon ^2)\), such that
and by (14) and (20) we can further assume the existence of some \(\varphi _0,\dots ,\varphi _n\) with \(|\varphi _0|,\dots ,|\varphi _n|\le (n+2+A)\epsilon +(m+B)\max _x\beta (x)\epsilon +O(\epsilon ^2)\), such that
The statement then follows directly from
\(\square \)
The special cases of Theorem 2 for the two different forms of the barycentric rational interpolant then follow with the same reasoning as in Sect. 3.
Corollary 3
Assume that the weights \(w_0,\dots ,w_n\) can be computed as in (24). Then there exists for any \({\varvec{f}}\in \mathbb {F}^{n+1}\) some \(\varvec{\hat{f}}\in \mathbb {F}^{n+1}\) with
for \(\epsilon \) small enough, such that the second barycentric form in (4) satisfies \(\hat{r}(x;{\varvec{x}},{\varvec{f}}) = r(x;{\varvec{x}},\varvec{\hat{f}})\).
Corollary 4
Assume that the weights \(w_0,\dots ,w_n\) can be computed as in (24) and the values \(\lambda _0(x),\dots ,\lambda _{n-d}(x)\) as in (27). Then there exists for any \({\varvec{f}}\in \mathbb {F}^{n+1}\) some \(\varvec{\hat{f}}\in \mathbb {F}^{n+1}\) with
for \(\epsilon \) small enough, such that the first barycentric form in (7) satisfies \(\hat{r}(x;{\varvec{x}},{\varvec{f}}) = r(x;{\varvec{x}},\varvec{\hat{f}})\).
Upper bound for \(\Gamma _d\)
In case of the first barycentric form, the bounds for the forward and backward stability depend on the function \(\Gamma _d\) in (26), and we still need to show that this function is bounded from above. Note that \(\Gamma _0=\Lambda _n\), because \(w_i={(-1)}^i\) and \(\lambda _i={(-1)}^i/(x-x_i)\) if \(d=0\), so that the bound for the Lebesgue constant of Berrut’s interpolant [8] also holds for \(\Gamma _d\) in this case. In the following, we therefore assume \(d\ge 1\) and define
so that \(\Gamma _d(x;{\varvec{x}})=N(x)/D(x)\). We further assume that \(x\in (x_k,x_{k+1})\) for some \(k\in \{0,\dots ,n-1\}\). It then follows from the definition in (8) that all \(\lambda _i\) with index in
where \(I=\{0,\dots ,n-d\}\), have the same sign as \({(-1)}^{k+d}\) and that the sign alternates for decreasing indices “to the left” and increasing indices “to the right” of \(I_3\). More precisely, the \(\lambda _i\) with index in
have the same sign as the ones with index in \(I_3\), while the sign is opposite if the index is in
Without loss of generality, we assume that the \(\lambda _i\) are positive for \(i\in I_2,I_3,I_4\) and negative for \(i\in I_1,I_5\), since multiplying all \(\lambda _i\) with a common constant does not change \(\Gamma _d\). Letting
we then find that
To bound \(\Gamma _d\), we need to bound the sum of the negative \(\lambda _i\) with \(i\in I_1, I_5\) as well as the \(\lambda _i\) with \(i\in I_3\), which in turn requires first bounding the terms \(x-x_i\). To this end, let \(h_i=x_{i+1}-x_i\) for \(i\in \{0,\dots ,n-1\}\) and define
Proposition 1
For any \(i\in \{0,\dots ,n\}\), the distance between \(x\in (x_k,x_{k+1})\) and \(x_i\) is bounded as
where \(t=2(x-x_k)/h_k-1\in (-1,1)\).
Proof
The statement follows directly by noting that
for the given t and because \(h_{\min }\le h_i\le h_{\max }\) for any \(i\in \{0,\dots ,n-1\}\). \(\square \)
For bounding the negative \(\lambda _i\) from above, it turns out to be useful to consider them in pairs, with indices from \(I_1\) and \(I_5\) at the same “distance” from \(I_3\).
Lemma 4
For any \(j\in \mathbb {N}\) and \(x\in (x_k,x_{k+1})\),
where we set \(\lambda _i(x)=0\) for any \(i\notin I\).
Proof
Since the denominator of \(\lambda _{k-d-2j+1}(x)\), for \(k-d-2j+1\ge 0\), contains the terms \(x-x_{k-2j+1-m}\) for \(m=0,\dots ,d\), it follows from Proposition 1 that
with \(t\in (-1,1)\) and likewise
for \(k+2j\le n-d\). Combining both bounds, we get
where
As g is clearly even and
it remains to show that \(g(t)\le g(1)\) for \(t\in [0,1]\). To this end, note that \(g(t)=p(t)/q(t)\) for
and
By the product rule,
and
For \(t\in [0,1]\), we observe that \(p(t)>0\), \(q(t)>0\), \(p'(t)\ge 0\), and \(q'(t)\le 0\), hence
and it follows from the quotient rule that g is monotonically increasing over [0, 1]. \(\square \)
Next, let us bound the \(\lambda _i\) with indices in \(I_3\) from below.
Lemma 5
For any \(i\in I_3\) and \(x\in (x_k,x_{k+1})\),
Proof
Since the denominator of \(\lambda _i(x)\), for \(i\in I_3\), contains the factors \(x-x_{k-m}\) for \(m=0,\dots ,k-i\) and the factors \(x_{k+1+l}-x\) for \(l=0,\dots ,i+d-k-1\), we conclude from Proposition 1 that
and the statement then follows, because \(a!b!\le (a+b-1)!\) for any \(a,b\in \mathbb {N}\). \(\square \)
We are now ready to derive a general upper bound on the function \(\Gamma _d\) in (26), which turns out to depend on d and the mesh ratio
Theorem 3
If \(d\ge 1\), then
for any set of ascending interpolation nodes \({\varvec{x}}=(x_0,\dots ,x_n)\in \mathbb {R}^{n+1}\) and any \(x\in [x_0,x_n]\).
Proof
If \(x=x_k\) for some \(k\in \{0,\dots ,n\}\), then, after multiplying both N(x) and D(x) by \(\prod _{i=0}^n|x-x_i|\), we find that \(\Gamma _d(x;{\varvec{x}})=1\), which is clearly smaller than the upper bound in (34). Otherwise, there exists some \(k\in \{0,\dots ,n-1\}\), such that \(x\in (x_k,x_{k+1})\), and it follows from Lemma 4 that
where the sum of the series can be found in [17, p. 464]. Together with Lemma 5, this implies
for any \(i\in I_3\). As \(\lambda _i(x)\le S_3\le D(x)\) and \(N(x)-D(x)=2(S_1+S_5)\), we conclude that
and the statement then follows immediately. \(\square \)
In the case of equidistant nodes, when the mesh ratio is \(\mu =1\), the upper bound in (34) is simply \(1+1/(2d)\), so it becomes smaller as d grows. For other nodes, \(\mu \) may depend on n, which may result in very large upper bounds. For example, in the case of Chebyshev nodes, one can show that \(\mu \) grows asymptotically linearly in n. However, our numerical experiments suggest that the function \(\Gamma _d\) is always small, and we believe that the upper bound in Theorem 3 can be improved significantly in future work.
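This behaviour is easy to probe numerically; the following Python sketch (our own check) evaluates \(\Gamma _d(x;{\varvec{x}})=\sum _{i=0}^{n-d}|\lambda _i(x)|\big /\bigl |\sum _{i=0}^{n-d}\lambda _i(x)\bigr |\) on a fine grid for equidistant nodes and compares the sampled maximum with the bound \(1+1/(2d)\) for \(\mu =1\):

```python
def gamma_d(x, nodes, d):
    """Gamma_d(x) from (26), with lambda_i(x) = (-1)^i / prod_{j=i}^{i+d} (x - x_j)."""
    n = len(nodes) - 1
    lam = []
    for i in range(n - d + 1):
        p = 1.0
        for j in range(i, i + d + 1):
            p *= x - nodes[j]
        lam.append((-1.0) ** i / p)
    return sum(abs(li) for li in lam) / abs(sum(lam))

n, d = 40, 3
nodes = [2 * i / n - 1 for i in range(n + 1)]      # equidistant, mesh ratio 1
worst = 0.0
for k in range(2000):
    x = -0.9995 + 1.999 * k / 1999
    if min(abs(x - xi) for xi in nodes) > 1e-9:    # skip the nodes themselves
        worst = max(worst, gamma_d(x, nodes, d))
```

In accordance with Theorem 3, the sampled maximum stays between 1 and \(1+1/(2d)\).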
Numerical experiments
We performed numerous experiments to verify the results proven in the previous sections numerically and report some representative results below. In particular, we analyze the various algorithms that implement the first barycentric form (7) both in terms of stability and computational cost (Sect. 7.1) and focus on the comparison between the first and the second form for an example where \(\Lambda _n\gg \Gamma _d\) (Sect. 7.2).
Our experimental platform is a Windows 10 laptop with a 1.8 GHz Intel Core i7-10510U processor and 16 GB RAM, and we implemented all algorithms in C++. In what follows, the ‘exact’ values were computed in multiple-precision (1024 bit) floating point arithmetic using the MPFR library [18], while all other values were computed in standard double precision. Moreover, we took care of providing all input data (interpolation nodes, data values, and evaluation points) in double precision, so that they do not cause any additional error.
Comparison of algorithms for the first barycentric form
For the first example, we consider the case of \(n+1\) interpolation nodes \(x_i={\textrm{fl}}(y_i)\in [-1,1]\) for \(i=0,\dots ,n\), derived from the equidistant nodes \(y_i=2i/n-1\) by rounding them to double precision, and associated (rounded) data \(f_i={\textrm{fl}}(f(y_i))\), sampled from the test function
and we compare three ways of evaluating the first barycentric form in (7), which differ in the way the denominator is computed.
The first algorithm simply evaluates the functions \(\lambda _i\) as in (8), leading to the error mentioned in Lemma 2, and then sums up these values to get the denominator. The second algorithm by Camargo [11, Section 4.1] instead increases the stability of the summation by first computing sums of pairs of \(\lambda _i\)’s such that all these sums have the same sign. The third algorithm implements our iterative strategy in (31) before taking the sum, which is more efficient than the first algorithm, but less precise (cf. Lemma 3). All three algorithms compute the numerator of the first barycentric form in the same way, first dividing the weights \(w_i\) by \(x-x_i\), then multiplying the results by \(f_i\), and finally summing up these values. The \(w_i\) themselves are precomputed with the pyramid algorithm in (29) and (30). Note that, although the weights for equidistant nodes are integer multiples of each other in theory [6], they do not have this property in this example, since the nodes \(x_i\) are not exactly equidistant due to the rounding.
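Since both barycentric forms involve only rational operations, exact rational arithmetic can play the role of the multiple-precision reference on a small scale. The following Python sketch is our own stand-in for the C++/MPFR setup: `fractions.Fraction` replaces MPFR, and Berrut’s weights \(w_i={(-1)}^i\) replace (6) for simplicity; it measures a relative forward error of the second form:

```python
from fractions import Fraction

def bary2(x, nodes, f, w, num=float):
    """Second barycentric form (4); with num=Fraction every operation is exact."""
    terms = [num(wi) / (num(x) - num(xi)) for wi, xi in zip(w, nodes)]
    return sum(t * num(fi) for t, fi in zip(terms, f)) / sum(terms)

n = 8
nodes = [2.0 * i / n - 1.0 for i in range(n + 1)]  # double precision inputs
w = [(-1.0) ** i for i in range(n + 1)]            # Berrut weights (5)
f = [xi ** 2 for xi in nodes]
x = 0.123

approx = bary2(x, nodes, f, w)                     # double precision result
exact = bary2(x, nodes, f, w, num=Fraction)        # exact rational reference
rel_err = abs(Fraction(approx) - exact) / abs(exact)
```

Note that, as in Sect. 2, all inputs are double precision numbers, so the rational reference is exact for precisely the data that the floating point algorithm sees.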
To compare these three algorithms, we used them to evaluate the barycentric rational interpolant with weights in (6) for \(d\in \{1,5,25\}\) and an increasing number of interpolation nodes, \(n\in \{39,79,159,319,639,1279\}\), at 50,000 random points from \([-1+10^3\epsilon ,1-10^3\epsilon ]\setminus \{x_0,\dots ,x_n\}\). Figure 1 shows the corresponding running times and the distribution of the relative forward error of the computed values. For the latter, we chose a box plot, where the bottom and top of each box represent the interquartile range, from the first to the third quartile, and the line in the middle shows the median of the relative forward errors. The whiskers range from the minimum to the maximum, excluding those values that are more than 1.5 times the interquartile range greater than the third quartile, which are instead considered outliers and shown as isolated points.
On the one hand, we observe that Camargo’s algorithm beats the standard algorithm in terms of running time, but that our efficient algorithm is the fastest, especially as d grows, because its time complexity does not depend on d. On the other hand, our efficient algorithm is less precise than the standard algorithm, as predicted by Lemma 3, and Camargo’s algorithm gives the smallest errors, except for \(d=5\) and \(n\in \{639,1279\}\). Nevertheless, the computations confirm the forward stability of all three algorithms. For Camargo’s algorithm, this follows from the backward stability, which is proven in [11], and for the other two algorithms it is implied by Corollary 2 and Lemmas 1–3.
The rather large errors of the outliers in the case \(d=25\) can be explained by the behaviour of the componentwise relative condition number \(\kappa \), shown in Fig. 2. While \(\kappa \) is small for all \(x\in [-1,1]\) in the case of \(d=1\), it starts to grow considerably close to the end points of the interval as d grows, up to \(10^6\) for \(n=39\) and \(10^8\) for \(n=1279\) in the case of \(d=25\), and so does the upper bound on the relative forward error in (28). While this upper bound also depends on \(\Gamma _d\), Fig. 2 shows that this function is always small and its maximum even decreases as d grows, independently of n. This is in agreement with Theorem 3, because \(\mu \approx 1\) in this example. The fact that the maximum error still seems to decrease for \(d=25\) as n increases is simply due to the fact that the 50,000 sample points are not sufficiently many to “catch” the worst case, because the region near the endpoints where \(\kappa \) grows rapidly actually shrinks as n grows.
Of course, it is also possible to evaluate the rational interpolant using the second barycentric form (4), which is actually the best choice for this example: it gives relative forward errors similar to those of the standard algorithm for the first barycentric form, while being roughly twice as fast as the efficient algorithm, neither of which is surprising. Regarding the efficiency, the second form is superior, because the denominator can be computed “on the fly” at almost no extra cost during the evaluation of the numerator. As for the error, we note that \(\Lambda _n\) is much smaller than \(\kappa \) for the nodes used in this example [10], and so the upper bound in (25) is dominated by \(\kappa \), exactly as the upper bound in (28). However, for non-uniform nodes, the situation can be quite different, as the next example shows.
Worst-case comparison of first and second barycentric form
The aim of the second example is to compare the standard algorithm for the first barycentric form in (7), as described in Sect. 7.1, with a straightforward implementation of the second barycentric form, following the formula in (4). The weights \(w_i\) are again precomputed with the pyramid algorithm.
We consider \(n=29\), \(d=3\), and interpolation nodes \(x_i={\textrm{fl}}(y_i)\in [0,1]\) for \(i=0,\dots ,n\), obtained by rounding to double precision the values \(y_i=F(t_i)\), where
and \(t_i=i/n\). We choose these nodes, because the functions \(\Lambda _n\) and \(\Gamma _d\) behave completely differently in this case, as shown in Fig. 3. While \(\Lambda _n\) reaches huge values, up to \(10^{17}\), \(\Gamma _d\) is small (even though the latter is not guaranteed by Theorem 3). Hence, the upper bounds in Corollaries 1 and 2 suggest that we may see a big difference in the forward stability of the first and the second barycentric form, if we choose the data such that the condition number \(\kappa \) is small.
One such choice, which is also presented by Higham for the case \(d=n\) with equidistant nodes [1], is to take the data
which can be interpreted as having been sampled from the nth Lagrange basis polynomial \(l_n(x)=\prod _{i=0}^{n-1}\frac{x-x_i}{x_n-x_i}\), that is, \(f_i={\textrm{fl}}(l_n(y_i))\). For this data, we know that \(\kappa (x;{\varvec{x}},{\varvec{f}})=1\) for all \(x\in [0,1]\), so the upper bounds on the relative forward errors in (25) and (28) are dominated by \(\Lambda _n\) and \(\Gamma _d\), respectively. Consequently, as shown in Fig. 4, the barycentric rational interpolant is reproduced faithfully by the first form, but not by the second, because the relative forward error for the first form is on the order of \(\epsilon \), while it can be on the order of 1 for the second form; see Fig. 5 (left).
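The data for this experiment can be generated directly from the formula for \(l_n\); a small sketch (the helper name lagrange_ln is ours), where the product is taken over the rounded nodes \(x_i\) but evaluated at the unrounded \(y_i\):

```cpp
#include <cstddef>
#include <vector>

// Sketch: evaluate the nth Lagrange basis polynomial
//   l_n(y) = prod_{i=0}^{n-1} (y - x_i) / (x_n - x_i)
// over the (rounded) nodes x; the data are then f_i = fl(l_n(y_i)).
double lagrange_ln(const std::vector<double>& x, double y) {
    const std::size_t n = x.size() - 1;
    double p = 1.0;
    for (std::size_t i = 0; i < n; ++i)      // the i = n factor is excluded
        p *= (y - x[i]) / (x[n] - x[i]);
    return p;
}
```

Since \(x_i={\textrm{fl}}(y_i)\) differs slightly from \(y_i\), the resulting \(f_i\) are close to, but not exactly, \(\delta _{in}\).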
However, the opposite may happen as well. If we consider the data
sampled from the constant one function, then \(\kappa =\Lambda _n\), and the upper bounds in (25) and (28) suggest that both forms can be unstable, even though the barycentric rational interpolant is simply \(r(x)=1\). Figure 5 (right) and Figure 6 confirm that this is indeed the case for the first barycentric form. However, the second barycentric form is perfectly stable, because the numerator and the denominator in (4) are identical and cancel out to always give the correct function value 1.
Conclusion
Barycentric interpolation offers a fast and stable means of evaluating the polynomial Lagrange interpolant using either the first or the second barycentric form. While the first form is always numerically stable, the second form is stable only for interpolation nodes with a small Lebesgue constant.
Evaluating a rational interpolant via the second barycentric form comes with the same limitation, but for the special family of barycentric rational interpolants with weights in (6), a computationally more attractive first barycentric form is available. Instead of depending on the Lebesgue constant, both the forward and the backward stability of a straightforward implementation of this first barycentric form depend on the function \(\Gamma _d\) in (26).
Unlike the Lebesgue constant, Theorem 3 shows that the maximum of \(\Gamma _d\) is independent of n. Moreover, it is guaranteed to be very small for equidistant nodes, regardless of d, while the Lebesgue constant is known to grow logarithmically in n and exponentially in d in this case. Based on our numerical experiments, the maximum of \(\Gamma _d\) seems to be small for other distributions of interpolation nodes, too, even if the mesh ratio \(\mu \) is big, as in the example in Sect. 7.2, and we believe that the upper bound in (34) can be improved considerably in future work. For example, if \(d=n\), then \(\Gamma _d\) is just the constant one function, independent of \(\mu \).
Overall, we conclude that the most efficient way of stably evaluating a barycentric rational interpolant with weights in (6) is by determining the weights with the pyramid algorithm in (29) and (30) in a preprocessing step, and, for every evaluation point x, first computing the values \(\lambda _i(x)\) with our iterative strategy in (31) and then the value of the interpolant using a straightforward implementation of the first barycentric form in (7). Alternatively, computing the sum of the \(\lambda _i(x)\) in the denominator with Camargo’s algorithm [11] results in slightly smaller forward errors, but is significantly slower, especially for larger d.
Change history
09 November 2022
A Correction to this paper has been published: https://doi.org/10.1007/s00211-022-01330-y
Notes
This is somewhat similar to counting the number of floating-point operations with Stewart’s relative error counter [16], and is sufficient for our needs.
References
Higham, N.J.: The numerical stability of barycentric Lagrange interpolation. IMA J. Numer. Anal. 24(4), 547–556 (2004). https://doi.org/10.1093/imanum/24.4.547
Berrut, J.P., Trefethen, L.N.: Barycentric Lagrange interpolation. SIAM Rev. 46(3), 501–517 (2004). https://doi.org/10.1137/S0036144502417715
Berrut, J.P., Mittelmann, H.D.: Lebesgue constant minimizing linear rational interpolation of continuous functions over the interval. Comput. Math. Appl. 33(6), 77–86 (1997). https://doi.org/10.1016/S0898-1221(97)00034-5
Schneider, C., Werner, W.: Some new aspects of rational interpolation. Math. Comput. 47(175), 285–299 (1986). https://doi.org/10.1090/S0025-5718-1986-0842136-8
Berrut, J.P.: Rational functions for guaranteed and experimentally well-conditioned global interpolation. Comput. Math. Appl. 15(1), 1–16 (1988). https://doi.org/10.1016/0898-1221(88)90067-3
Floater, M.S., Hormann, K.: Barycentric rational interpolation with no poles and high rates of approximation. Numer. Math. 107(2), 315–331 (2007). https://doi.org/10.1007/s00211-007-0093-y
Salazar Celis, O.: Practical rational interpolation of exact and inexact data: theory and algorithms. PhD thesis, Department of Computer Science, University of Antwerp (2008)
Bos, L., De Marchi, S., Hormann, K., Sidon, J.: Bounding the Lebesgue constant for Berrut’s rational interpolant at general nodes. J. Approx. Theory 169, 7–22 (2013). https://doi.org/10.1016/j.jat.2013.01.004
Bos, L., De Marchi, S., Hormann, K., Klein, G.: On the Lebesgue constant of barycentric rational interpolation at equidistant nodes. Numer. Math. 121(3), 461–471 (2012). https://doi.org/10.1007/s00211-011-0442-8
Hormann, K., Klein, G., De Marchi, S.: Barycentric rational interpolation at quasi-equidistant nodes. Dolomit. Res. Notes Approx. 5(1), 1–6 (2012). https://doi.org/10.14658/pupj-drna-2012-1-1
de Camargo, A.P.: On the numerical stability of Floater–Hormann’s rational interpolant. Numer. Algorithms 72(1), 131–152 (2016). https://doi.org/10.1007/s11075-015-0037-z
Hormann, K., Schaefer, S.: Pyramid algorithms for barycentric rational interpolation. Comput. Aided Geom. Des. 42, 1–6 (2016). https://doi.org/10.1016/j.cagd.2015.12.004
Mascarenhas, W., Camargo, A.: The backward stability of the second barycentric formula for interpolation. Dolomit. Res. Notes Approx. 7(1), 1–12 (2014). https://doi.org/10.14658/pupj-drna-2014-1-1
Trefethen, L.N., Bau, D.: Numerical Linear Algebra. SIAM, Philadelphia (1997)
Gohberg, I., Koltracht, I.: Mixed, componentwise, and structured condition numbers. SIAM J. Matrix Anal. Appl. 14(3), 688–704 (1993). https://doi.org/10.1137/0614049
Higham, N.J.: Accuracy and Stability of Numerical Algorithms, 2nd edn. SIAM, Philadelphia (2002)
Bronshtein, I.N., Semendyayev, K.A., Musiol, G., Mühlig, H.: Handbook of Mathematics, 6th edn. Springer, Berlin (2015)
Fousse, L., Hanrot, G., Lefèvre, V., Pélissier, P., Zimmermann, P.: MPFR: A multipleprecision binary floatingpoint library with correct rounding. ACM Trans. Math. Softw. (2007). https://doi.org/10.1145/1236463.1236468
Acknowledgements
This work was supported by the Swiss National Science Foundation (SNSF) under project number 188577.
Funding
Open access funding provided by Università della Svizzera italiana
Cite this article
Fuda, C., Campagna, R. & Hormann, K. On the numerical stability of linear barycentric rational interpolation. Numer. Math. 152, 761–786 (2022). https://doi.org/10.1007/s00211-022-01316-w
Mathematics Subject Classification
 65D05
 41A20
 65G50