
The effects of rounding errors in the nodes on barycentric interpolation

Published in Numerische Mathematik.

Abstract

We analyze the effects of rounding errors in the nodes on polynomial barycentric interpolation. These errors are particularly relevant for the first barycentric formula with the Chebyshev points of the second kind. Here, we propose a method for reducing them.


References

  1. Berrut, J.-P.: Rational functions for guaranteed and experimentally well conditioned global interpolation. Comput. Math. Appl. 15, 1–16 (1988)

  2. Bos, L., De Marchi, S., Hormann, K.: On the Lebesgue constant of Berrut’s rational interpolant at equidistant nodes. J. Comput. Appl. Math. 236(4), 504–510 (2011)

  3. Bos, L., De Marchi, S., Hormann, K., Sidon, J.: Bounding the Lebesgue constant for Berrut’s rational interpolant in general nodes. J. Approx. Theory 169, 7–22 (2013)

  4. Dekker, T.J.: A floating-point technique for extending the available precision. Numer. Math. 18, 224–242 (1971)

  5. Floater, M., Hormann, K.: Barycentric rational interpolation with no poles and high rates of approximation. Numer. Math. 107, 315–331 (2007)

  6. Higham, N.J.: The numerical stability of barycentric Lagrange interpolation. IMA J. Numer. Anal. 24, 547–556 (2004)

  7. Higham, N.J.: Accuracy and Stability of Numerical Algorithms, 2nd edn. SIAM, Philadelphia (2002)

  8. Mascarenhas, W.F.: The stability of barycentric interpolation at the Chebyshev points of the second kind. Numer. Math. (2014). doi:10.1007/s00211-014-0612-6

  9. Mascarenhas, W.F., de Camargo, A.P.: On the backward stability of the second barycentric formula for interpolation. Dolomites Res. Notes Approx. 7, 1–12 (2014)

  10. Mascarenhas, W.F., de Camargo, A.P.: The effects of rounding errors on barycentric interpolation (extended version, with complete proofs). arXiv:1309.7970v3 [math.NA], 12 Jan 2016

  11. Powell, M.J.D.: Approximation Theory and Methods. Cambridge University Press, Cambridge (1981)

  12. Salzer, H.: Lagrangian interpolation at the Chebyshev points \(x_{n,\nu } \equiv \cos ({\nu \pi /n}),\nu = 0(1)n\); some unnoted advantages. Comput. J. 15(2), 156–159 (1972)

  13. Taylor, W.J.: Method of Lagrangian curvilinear interpolation. J. Res. Nat. Bur. Stand. 35, 151–155 (1945)

  14. Werner, W.: Polynomial interpolation: Lagrange versus Newton. Math. Comput. 43, 205–207 (1984)


Author information


Corresponding author

Correspondence to Walter F. Mascarenhas.

Additional information

Walter F. Mascarenhas was supported by Grant 2013/109162 from Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP); André Pierro de Camargo was supported by Grant 14225012012-0 from CNPq.

Appendices

Appendix 1: Proofs

This appendix proves the main results stated in the previous sections. We state four more lemmas and prove the main theorems; the lemmas and the remaining theorems are proved in the online version. The proof of Lemma 3 in the online version also shows that

$$\begin{aligned} \left| z_k\right|\le & {} \frac{a_k}{1 - a_k} \le \frac{2.4512 \left\| \mathbf {x} - {\hat{\mathbf {x}}} \right\| _\infty n^2}{1 - 4.5104 \times 10^{-3}} \le 2.4624 \left\| \mathbf {x} - {\hat{\mathbf {x}}} \right\| _\infty n^2\nonumber \\\le & {} 1.1328 \times 10^{-15} n^2\le 4.5312 \times 10^{-3}.\end{aligned}$$
(57)
$$\begin{aligned} \left\| \mathbf {z} \right\| _1\le & {} 0.71094 \left\| \mathbf {x} - {\hat{\mathbf {x}}} \right\| _\infty \left( 2.9 + \log n \right) ^2 n^2\nonumber \\\le & {} 3.2704 \times 10^{-16} \left( 2.9 + \log n \right) ^2 n^2 \le 0.39646\end{aligned}$$
(58)
$$\begin{aligned} \varDelta _k\le & {} 2 \kappa \mu + \frac{\kappa n^2}{4 \sqrt{2}} \left( 1 + \log 3 + \frac{3}{2} \log 9 \right) \le 2.7267 \left\| \mathbf {x} - {\hat{\mathbf {x}}} \right\| _\infty n^2\nonumber \\\le & {} 1.2543 \times 10^{-15} n^2\le 5.0172 \times 10^{-3}. \end{aligned}$$
(59)

and the proof of Lemma 4 shows that

$$\begin{aligned} \varLambda _{-1,1,\mathbf {x}^c} \le \frac{2}{\pi } \left( \log n + \gamma + \log \frac{8}{\pi } + \frac{\pi ^2}{144 n^2} \right) \le 0.63662 \left( \log n + 1.5127 \right) .\qquad \end{aligned}$$
(60)
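As a sanity check on (60), the Lebesgue constant of the Chebyshev points of the second kind can be estimated numerically and compared with this bound. The sketch below (plain Python; the grid resolution and function names are our own choices, not the paper's) evaluates the Lebesgue function \(\sum_k |\ell_k(x)|\) on a uniform grid. The grid maximum is a lower estimate of \(\varLambda _{-1,1,\mathbf {x}^c}\), so it must stay below the right-hand side of (60).

```python
import math

def lebesgue_estimate(n, grid=2001):
    """Grid estimate of the Lebesgue constant of the n + 1 Chebyshev
    points of the second kind on [-1, 1]."""
    xs = [math.cos(j * math.pi / n) for j in range(n + 1)]
    # lambda_k = 1 / prod_{j != k} (x_k - x_j)
    lam = [1.0 / math.prod(xs[k] - xs[j] for j in range(n + 1) if j != k)
           for k in range(n + 1)]
    best = 1.0  # the Lebesgue function equals 1 at the nodes themselves
    for i in range(grid):
        x = -1.0 + 2.0 * i / (grid - 1)
        if any(x == xk for xk in xs):
            continue
        omega = math.prod(x - xk for xk in xs)
        best = max(best, sum(abs(omega * lam[k] / (x - xs[k]))
                             for k in range(n + 1)))
    return best

def bound_60(n):
    """Right-hand side of (60)."""
    return 0.63662 * (math.log(n) + 1.5127)
```

For n = 10 the estimate is roughly 2.4, just below the bound \(0.63662 (\log 10 + 1.5127) \approx 2.43\); the margin is small because (60) is tight.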

Lemma 9

Given a vector \(\mathbf {v} = \left( v_0,v_1 \dots v_n \right) ^t \in {\mathbb {R}}^{n+1},\) define \(s := \sum _{k=0}^n v_k,\) \(\ a := \sum _{k = 0}^n \left| v_k\right| ,\)

$$\begin{aligned} s_- := {s_-}\left( \mathbf {v}\right) := - \sum _{k = 0}^n \min {\left\{ v_k,0 \right\} } \quad \mathrm {and}\quad s_+ := {s_+}\left( \mathbf {v}\right) := \sum _{k = 0}^n \max {\left\{ v_k,0 \right\} }. \end{aligned}$$

If \(s_- < 1\) then

$$\begin{aligned} -s \le \prod _{k=0}^n \frac{1}{1 + v_k} - 1 \le \frac{s_-}{\left( 1 - s_- \right) \left( 1 + s_+ \right) } - \frac{s_+}{1 + s_+} \le \frac{s_-}{1 - s_-} \end{aligned}$$
(61)

and

$$\begin{aligned} \prod _{k =0}^n \left( 1 + v_k \right) - 1 - s \ge - s_- s_+ \ge -\frac{a^2}{4}. \end{aligned}$$
(62)

Moreover,  if \(s_- < 1\) and \(s < 1\) then

$$\begin{aligned} \prod _{k =0}^n \left( 1 + v_k \right) - 1 - s \le \frac{s^2}{1 - s}. \end{aligned}$$
(63)

Finally,  if \(a < 1,\) then,  for \(0 \le m \le n,\) the product

$$\begin{aligned} P := \left( \prod _{k = 0}^m \frac{1}{1 + v_k} \right) \left( \prod _{k= m+1}^n \left( 1 + v_k \right) \right) \end{aligned}$$

satisfies

$$\begin{aligned} 1 - a \le P \le \frac{1}{1- a} \quad \mathrm {and}\quad \left| P - 1\right| \le \frac{a}{1 - a}. \end{aligned}$$
(64)
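Lemma 9 is elementary and easy to probe numerically. The sketch below (our own test harness, not part of the paper) draws small random vectors and checks (61), (62) and (64) up to a tiny tolerance for floating point noise.

```python
import math
import random

def split_sums(v):
    """s, a, s_minus and s_plus as defined in Lemma 9."""
    s = sum(v)
    a = sum(abs(x) for x in v)
    s_minus = -sum(min(x, 0.0) for x in v)
    s_plus = sum(max(x, 0.0) for x in v)
    return s, a, s_minus, s_plus

def check_lemma9(v, m, tol=1e-12):
    """Check (61), (62) and (64) for one vector v and split index m."""
    s, a, sm, sp = split_sums(v)
    assert sm < 1 and a < 1                     # hypotheses of the lemma
    inv = math.prod(1.0 / (1.0 + x) for x in v)
    prod = math.prod(1.0 + x for x in v)
    # (61)
    assert -s - tol <= inv - 1.0
    assert inv - 1.0 <= sm / ((1 - sm) * (1 + sp)) - sp / (1 + sp) + tol
    # (62)
    assert prod - 1.0 - s >= -sm * sp - tol >= -a * a / 4 - tol
    # (64): split product with inverted factors up to index m
    P = math.prod(1.0 / (1.0 + x) for x in v[:m + 1]) * \
        math.prod(1.0 + x for x in v[m + 1:])
    assert 1 - a - tol <= P <= 1 / (1 - a) + tol
    assert abs(P - 1.0) <= a / (1 - a) + tol

rng = random.Random(0)
for _ in range(1000):
    check_lemma9([rng.uniform(-0.01, 0.01) for _ in range(20)], m=9)
```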

Lemma 10

For \(1 \le k \le n/2,\) the Chebyshev points of the second kind satisfy

$$\begin{aligned} \sum _{j = 0}^{k-1} \frac{1}{x_k - x_j} \le \frac{1}{{2 \sin }\left( \frac{k \pi }{n}\right) {\sin }\left( \frac{k \pi }{2n}\right) } \left( 1 + k \, {\log }\left( 4k - 1\right) \right) \le \frac{n^2}{4 \sqrt{2} \ k^2} \left( 1 + k \, {\log }\left( 4k - 1\right) \right) \nonumber \\ \end{aligned}$$
(65)

and

$$\begin{aligned} \sum _{j = k+1}^{n} \frac{1}{x_j - x_k} \le \frac{3n^2}{4 \sqrt{2}\, k} {\log }\left( 4 k + 1\right) . \end{aligned}$$
(66)

In particular,  since (65) and (66) decrease with k,  for \(1 \le k \le n/2\)

$$\begin{aligned} \sum _{j \ne k} \frac{1}{\left| x_k - x_j\right| }\le & {} \frac{n^2}{4 \sqrt{2}} \left( 1 + \log \!3 + 3 \log \! 5 \right) < 1.2246\, n^2\quad \mathrm {and}\\ \sum _{j=1}^n \frac{1}{x_j -x_0}\le & {} \frac{\pi ^2 n^2}{12} < 0.82247 \, n^2. \end{aligned}$$

Moreover,  if \(n \ge 10\) then,  for \(0 \le k \le n,\)

$$\begin{aligned} \frac{1}{x_{k+1} - x_k} \le 0.20432 \, n^2. \end{aligned}$$
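These bounds are easy to test numerically. The sketch below (our own check, using the node formula from Appendix 2 and the constants copied from the lemma) verifies them in double precision for a few values of n.

```python
import math

def chebyshev2(n):
    """Increasing Chebyshev points of the second kind on [-1, 1]."""
    return [math.sin((2 * k - n) * math.pi / (2 * n)) for k in range(n + 1)]

for n in (10, 50, 200):
    xs = chebyshev2(n)
    # sum_{j != k} 1/|x_k - x_j| < 1.2246 n^2 for 1 <= k <= n/2
    for k in range(1, n // 2 + 1):
        s = sum(1.0 / abs(xs[k] - xs[j]) for j in range(n + 1) if j != k)
        assert s < 1.2246 * n * n
    # sum_{j >= 1} 1/(x_j - x_0) < 0.82247 n^2
    assert sum(1.0 / (xs[j] - xs[0]) for j in range(1, n + 1)) < 0.82247 * n * n
    # 1/(x_{k+1} - x_k) <= 0.20432 n^2 for n >= 10
    assert max(1.0 / (xs[k + 1] - xs[k]) for k in range(n)) <= 0.20432 * n * n
```

The last bound is nearly attained at the extreme gaps: for n = 10, \(1/(x_1 - x_0) \approx 20.43\) against the bound 20.432.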

Lemma 11

If \(x^- = x_0,\) \(x^+ = x_n\) and

$$\begin{aligned} \mu := \frac{1}{\min _{0 \le k < n} \left( x_{k+1}- x_k \right) } \end{aligned}$$

is such that \(2 \mu \left\| {\hat{\mathbf {x}}} - \mathbf {x} \right\| _\infty < 1,\) then \(\delta _{jk},\) \(\varDelta _k\) in (13)–(15) and \(a_k = \sum _{j \ne k} \delta _{jk}\) satisfy

$$\begin{aligned} \left| \delta _{jk}\right| \le \frac{\kappa }{\left| x_j - x_k\right| }\quad \mathrm {for}\ j \ne k,\qquad a_k \le \kappa \sum _{j \ne k} \frac{1}{\left| x_j - x_k\right| } \end{aligned}$$
(67)

and,  for \(0 \le k < n,\)

$$\begin{aligned} \varDelta _k \le \kappa \left( \sum _{j = 0}^{k-1} \frac{1}{x_k - x_j} + \frac{2}{x_{k+1} - x_k} + \sum _{j = k + 2}^n \frac{1}{x_j - x_{k+1}} \right) , \end{aligned}$$
(68)

for

$$\begin{aligned} \kappa := \frac{2 \left\| {\hat{\mathbf {x}}} - \mathbf {x} \right\| _\infty }{1 - 2 \mu \left\| {\hat{\mathbf {x}}} - \mathbf {x} \right\| _\infty }. \end{aligned}$$
(69)

Lemma 12

Consider nodes \(\mathbf {x} = {\left\{ x_0,\ldots ,x_n \right\} }\) and interval \([x^-,x^+]\). Given \(0 \le k \le n,\) define \(\mathbf {x}' := {\left\{ x_0,\ldots ,x_n \right\} } {\setminus } {\left\{ x_k \right\} }\). If \(x \in [x^-,x^+]\) then

$$\begin{aligned} \left| {P}\left( x;\mathbf {x},\mathbf {y}\right) - y_k\right| \le \varLambda _{x^-,x^+,\mathbf {x}'} \left| x - x_k\right| \max _{j \ne k} \left| \frac{y_j - y_k}{x_j - x_k}\right| . \end{aligned}$$
(70)

1.1 Proofs of the main results

Proof of Theorem 1

Note that, for \(0 \le k \le n\),

$$\begin{aligned} \left| {Q}\left( x_k\right) - {P}\left( x_k;\mathbf {x},\mathbf {y}\right) \right| \le \left| {Q}\left( x_k\right) - {f}\left( x_k\right) \right| + \left| {f}\left( x_k\right) - {P}\left( x_k;\mathbf {x},\mathbf {y}\right) \right| \le M, \end{aligned}$$

and Eq. (18) leads to

$$\begin{aligned} \left| {Q}\left( x\right) - {P}\left( x;\mathbf {x},\mathbf {y}\right) \right| \le \varLambda _{\hat{x}^-,\hat{x}^+,\mathbf {x}} M \end{aligned}$$
(71)

for all \(x \in [\hat{x}^-,\hat{x}^+]\). Similarly,

$$\begin{aligned} \left| {Q}\left( \hat{x}_k\right) - {P}\left( \hat{x}_k;{\hat{\mathbf {x}}},\mathbf {y}\right) \right|\le & {} \left| {Q}\left( \hat{x}_k\right) - {f}\left( \hat{x}_k\right) \right| + \left| {f}\left( \hat{x}_k\right) - {f}\left( x_k\right) \right| \\&+ \left| {f}\left( x_k\right) - {P}\left( \hat{x}_k;{\hat{\mathbf {x}}},\mathbf {y}\right) \right| \le M + L \left| \hat{x}_k - x_k\right| , \end{aligned}$$

and

$$\begin{aligned} \left| {Q}\left( x\right) - {P}\left( x;{\hat{\mathbf {x}}},\mathbf {y}\right) \right| \le \varLambda _{\hat{x}^-,\hat{x}^+,{\hat{\mathbf {x}}}} \left( M + L \left\| {\hat{\mathbf {x}}} - \mathbf {x} \right\| _\infty \right) . \end{aligned}$$
(72)

Equation (29) follows from (71), (72) and the triangle inequality. Equations (26) and (60) lead to

$$\begin{aligned} \varLambda _{-1,1,{\hat{\mathbf {x}}}^c} + \varLambda _{-1,1,\mathbf {x}^c} \le 2.0629 \varLambda _{-1,1,\mathbf {x}^c} \le 1.3133 \left( \log n + 1.5127 \right) . \end{aligned}$$
(73)

Equation (31) follows from the last two bounds and (29). \(\square \)

Proof of Theorem 2

Given \(\hat{x} \in [\hat{x}^-,\hat{x}^+]\), the arguments used in the proof of Theorem 3.2 in [6] show that the first formula p with rounded nodes \({\hat{\mathbf {x}}}\) satisfies

$$\begin{aligned} {\mathrm {fl}}\left( {p}\left( \hat{x},{\hat{\mathbf {x}}},\mathbf {y},\hat{\mathbf {w}}\right) \right) = \sum _{k = 0}^n \hat{w}_k y_k \langle {3 n + 5}\rangle _k \prod _{j \ne k} \left( \hat{x} - \hat{x}_j \right) \!, \end{aligned}$$
(74)

where the \(\langle {m}\rangle \) are Stewart’s relative error counters described in [7]. Lemma 2 yields x satisfying (32) and \(\beta _0, \dots , \beta _n\) such that \(\left| \beta _k\right| \le \varDelta / \left( 1 - \varDelta \right) \) and

$$\begin{aligned} \prod _{j \ne k} \left( \hat{x} - \hat{x}_j \right) = \left( 1 + \beta _k \right) \prod _{j \ne k} \left( x - x_j \right) . \end{aligned}$$

Combining this equation with (74) we obtain

$$\begin{aligned} {\mathrm {fl}}\left( {p}\left( \hat{x},{\hat{\mathbf {x}}},\mathbf {y},\hat{\mathbf {w}}\right) \right)= & {} \sum _{k = 0}^n w_k y_k \left( 1 + \beta _k \right) \langle {3 n + 5}\rangle _k \left( 1 + \frac{\hat{w}_k - w_k}{w_k} \right) \prod _{j \ne k} \left( x - x_j \right) \nonumber \\= & {} {p}\left( x,\mathbf {x},\tilde{\mathbf {y}},\mathbf {w}\right) \end{aligned}$$
(75)

for \(\tilde{y}_k\) in (33) and Theorem 2 follows from Lemma 3.1 in [7], which states that \(\left| \langle {m}\rangle - 1\right| \le m \varepsilon / \left( 1 - m \varepsilon \right) \) when \(m \varepsilon < 1\). \(\square \)
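For reference, the first barycentric formula p analysed in this proof can be sketched in a few lines. This is a straightforward double-precision implementation (the function names are ours, not the paper's):

```python
import math

def lagrange_weights(nodes):
    """Weights lambda_k = 1 / prod_{j != k} (x_k - x_j)."""
    return [1.0 / math.prod(nodes[k] - nodes[j]
                            for j in range(len(nodes)) if j != k)
            for k in range(len(nodes))]

def first_barycentric(x, nodes, values, weights):
    """First barycentric formula:
    p(x) = prod_k (x - x_k) * sum_k w_k y_k / (x - x_k)."""
    for xk, yk in zip(nodes, values):
        if x == xk:          # exact node hit: return the data value
            return yk
    omega = math.prod(x - xk for xk in nodes)
    return omega * sum(w * y / (x - xk)
                       for xk, y, w in zip(nodes, values, weights))
```

With nodes \((-1, 0, 1)\) and the samples of \(f(x) = x^2\), this reproduces \(p(1/2) = 1/4\) up to roundoff.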

Proof of Theorem 3

Taking \(\mathbf {x} = \hat{\mathbf {x}}\), and using Eqs. (1), (16) and (75), with \(w_k = {\lambda _k}\left( \hat{\mathbf {x}}\right) \) and \(\beta _k = 0\), and the fact that the first formula with weights \({\lambda }\!\!\left( {\hat{\mathbf {x}}}\right) \) interpolates \(\mathbf {y}\) at the nodes \({\hat{\mathbf {x}}}\), we obtain

$$\begin{aligned}&{\mathrm {fl}}\left( {p}\left( \hat{x},{\hat{\mathbf {x}}},\mathbf {y},\hat{\mathbf {w}}\right) \right) - {P}\left( \hat{x};{\hat{\mathbf {x}}},\mathbf {y}\right) = \left( \prod _{k = 0}^n \left( \hat{x} - \hat{x}_k \right) \right) \nonumber \\&\qquad \times \left( \sum _{k=0}^n \frac{{\lambda _k}\!\!\left( {\hat{\mathbf {x}}}\right) y_k \left( 1+ z_k \right) \langle {3n + 5}\rangle _k}{\hat{x} - \hat{x}_k} - \sum _{k=0}^n \frac{{\lambda _k}\!\!\left( {\hat{\mathbf {x}}}\right) y_k}{\hat{x} - \hat{x}_k} \right) \nonumber \\&\quad = \left( \prod _{k = 0}^n \left( \hat{x} - \hat{x}_k \right) \right) \left( \sum _{k = 0}^n \frac{{\lambda _k}\!\!\left( {\hat{\mathbf {x}}}\right) y_k \nu _k }{\hat{x} - \hat{x}_k} \right) = {P}\left( \hat{x};{\hat{\mathbf {x}}},\mathbf {\mathbf {y \varvec{\nu }}}\right) , \end{aligned}$$
(76)

for

$$\begin{aligned} \nu _k := \left( 1 + z_k \right) \langle {3 n + 5}\rangle _k - 1 = z_k \langle {3 n + 5}\rangle _k + \langle {3 n + 5}\rangle _k - 1. \end{aligned}$$

The definition of \(\langle {3 n + 5}\rangle _k\) in [7] and Lemma 9 show that

$$\begin{aligned} 1 - \left( 3 n + 5 \right) \varepsilon \le \langle {3 n + 5}\rangle _k \le \frac{1}{1 - \left( 3 n + 5 \right) \varepsilon }, \end{aligned}$$

and lead to (36); (35) follows from (18). Finally, (37) follows from (76) and Lemma 12. \(\square \)
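The counter bound used above is easy to illustrate: a product of m factors \((1 + \delta_i)^{\pm 1}\) with \(|\delta_i| \le \varepsilon \) stays within \(m \varepsilon / (1 - m \varepsilon)\) of 1. A small simulation (ours, with randomly drawn perturbations) confirms this comfortably:

```python
import random

EPS = 2.0 ** -53          # unit roundoff of IEEE-754 double precision

def counter_sample(m, rng):
    """One sample of Stewart's counter <m>: a product of m factors
    (1 + delta)^(+/-1) with |delta| <= EPS."""
    c = 1.0
    for _ in range(m):
        delta = rng.uniform(-EPS, EPS)
        c = c * (1.0 + delta) if rng.random() < 0.5 else c / (1.0 + delta)
    return c

def counter_bound(m):
    """Lemma 3.1 of [7]: |<m> - 1| <= m * eps / (1 - m * eps)."""
    return m * EPS / (1 - m * EPS)

rng = random.Random(0)
m = 3 * 100 + 5           # the counter <3n + 5> from (74) with n = 100
assert all(abs(counter_sample(m, rng) - 1.0) <= counter_bound(m)
           for _ in range(1000))
```

In practice the random samples land far below the worst-case bound, since the perturbations mostly cancel.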

Proof of Theorem 4

Let us write \(x := \hat{x}\), consider the function

$$\begin{aligned} {g}\left( x,\mathbf {y},\mathbf {w}\right) := \sum _{k = 0}^n \frac{w_k y_k}{x - \hat{x}_k}, \end{aligned}$$

and \(\tilde{\mathbf {w}} := {\lambda }\left( {\hat{\mathbf {x}}}\right) \), i.e., \(\tilde{w}_k = {\lambda _k}\left( {\hat{\mathbf {x}}}\right) \). By the definition of \(z_k\) in (16), \(\hat{\mathbf {w}} = \tilde{\mathbf {w}} + \tilde{\mathbf {w}}\mathbf {z}\), where \({\tilde{\mathbf {w}}\mathbf {z}}\) is the component-wise product of \({\tilde{\mathbf {w}}}\) and \(\mathbf {z}\), i.e., \((\tilde{w}z)_k = \tilde{w}_k \, z_k\). It follows that, for \(\mathbf {u} \in {\mathbb {R}}^{n+1}\),

$$\begin{aligned} {g}\left( x,\mathbf {u},\hat{\mathbf {w}}\right)= & {} \sum _{k = 0}^n \frac{\hat{w}_k u_k}{x - \hat{x}_k} = \sum _{k = 0}^n \tilde{w}_k \frac{\frac{\hat{w}_k - \tilde{w}_k}{\tilde{w}_k} u_k}{x - \hat{x}_k} + \sum _{k = 0}^n \frac{\tilde{w}_k u_k}{x - \hat{x}_k}\\= & {} \sum _{k = 0}^n \frac{\tilde{w}_k z_k u_k}{x - \hat{x}_k} + \sum _{k = 0}^n \frac{\tilde{w}_k u_k}{x - \hat{x}_k} = {g}\left( x,\mathbf {uz},\tilde{\mathbf {w}}\right) +{g}\left( x,\mathbf {u},\tilde{\mathbf {w}}\right) , \end{aligned}$$

and we can write the difference \({q}\left( x,{\hat{\mathbf {x}}},\mathbf {y},\hat{\mathbf {w}}\right) - {q}\left( x,{\hat{\mathbf {x}}},\mathbf {y},\tilde{\mathbf {w}}\right) \) in (38) as

$$\begin{aligned} {S}\left( x,\mathbf {y},\mathbf {z}\right):= & {} \frac{{g}\left( x,\mathbf {y},\hat{\mathbf {w}}\right) }{{g}\left( x,\mathbbm {1}_{} ,\hat{\mathbf {w}}\right) } - \frac{{g}\left( x,\mathbf {y},\tilde{\mathbf {w}}\right) }{{g}\left( x,\mathbbm {1}_{} ,\tilde{\mathbf {w}}\right) } = \frac{{g}\left( x,\mathbf {y},\tilde{\mathbf {w}}\right) + {g}\left( x,\mathbf {yz},\tilde{\mathbf {w}}\right) }{{g}\left( x,\mathbbm {1}_{} ,\tilde{\mathbf {w}}\right) + {g}\left( x,\mathbf {z},\tilde{\mathbf {w}}\right) } - \frac{{g}\left( x,\mathbf {y},\tilde{\mathbf {w}}\right) }{{g}\left( x,\mathbbm {1}_{} ,\tilde{\mathbf {w}}\right) } \\= & {} \frac{N + \varDelta \! N}{D + \varDelta \! D} - \frac{N}{D} \end{aligned}$$

for \(N := {g}\left( x,\mathbf {y},\tilde{\mathbf {w}}\right) \), \(\varDelta \! N := {g}\left( x,\mathbf {yz},\tilde{\mathbf {w}}\right) \), \(D := {g}\left( x,\mathbbm {1}_{} ,\tilde{\mathbf {w}}\right) \), and \(\varDelta \! D := {g}\left( x,\mathbf {z},\tilde{\mathbf {w}}\right) \). Since \(\frac{\varDelta \! N}{D} - \frac{N}{D} \frac{\varDelta \! D}{D}\) is equal to the Error Polynomial \( {E}\left( x\, ;{\hat{\mathbf {x}}}, \mathbf {y}, \mathbf {z}\right) \) in (7), we get

$$\begin{aligned} {S}\left( x,\mathbf {y},\mathbf {z}\right) = \frac{1}{D} \left( \frac{N + \varDelta \! N}{1 + \frac{\varDelta \! D}{D}} - N \right) = \frac{1}{{1 + \frac{\varDelta \! D}{D}}} {E}\left( x\, ;{\hat{\mathbf {x}}}, \mathbf {y}, \mathbf {z}\right) . \end{aligned}$$
(77)

The ratio \(\frac{\varDelta \! D}{D}\) is equal to \({P}\left( x, \hat{\mathbf {x}}, \mathbf {z}\right) \). Therefore, the denominator of (77) satisfies

$$\begin{aligned} 1 + \varLambda _{\hat{x}^-,\hat{x}^+,{\hat{\mathbf {x}}}} \left\| \mathbf {z} \right\| _\infty\ge & {} 1 + \left| \frac{\varDelta \! D}{D}\right| \ge \left| 1 + \frac{\varDelta \! D}{D}\right| \ge 1 - \left| \frac{\varDelta \! D}{D}\right| \nonumber \\= & {} 1 - \left| {P}\left( x,\hat{\mathbf {x}},\mathbf {z}\right) \right| \ge 1 - \varLambda _{\hat{x}^-,\hat{x}^+,{\hat{\mathbf {x}}}}\left\| \mathbf {z} \right\| _\infty . \end{aligned}$$
(78)

The forward bound (38) follows from (77) and (78) and we are done. \(\square \)
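The function q in this proof is the second (true) barycentric formula. A minimal double-precision sketch (with our own naming) is:

```python
def second_barycentric(x, nodes, values, weights):
    """Second barycentric formula:
    q(x) = (sum_k w_k y_k / (x - x_k)) / (sum_k w_k / (x - x_k))."""
    num = den = 0.0
    for xk, yk, wk in zip(nodes, values, weights):
        if x == xk:
            return yk        # exact node hit: return the data value
        t = wk / (x - xk)
        num += t * yk
        den += t
    return num / den
```

Perturbing the weights \(w_k\) to \(w_k (1 + z_k)\) changes the value of this ratio by exactly the quantity S bounded in the proof above.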

Proof of Theorem 5

Expanding the polynomials \({P}\left( x;{\hat{\mathbf {x}}},\mathbf {yz}\right) \) in (7) in Lagrange’s basis we obtain

$$\begin{aligned} \left| {E}\left( x\, ;{\hat{\mathbf {x}}}, \mathbf {y}, \mathbf {z}\right) \right|= & {} \left| \sum _{k=0}^n z_k {\ell _{k}}\left( x;\, {\hat{\mathbf {x}}}\right) \left( y_k - {P}\left( x;{\hat{\mathbf {x}}},\mathbf {y}\right) \right) \right| \\\le & {} \sum _{k=0}^n \left| z_k {\ell _{k}}\left( x;\, {\hat{\mathbf {x}}}\right) \right| \left| {f}\left( \hat{x}_k\right) - {f}\left( x\right) \right| \\&+ \sum _{k=0}^n \left| z_k {\ell _{k}}\left( x;\, {\hat{\mathbf {x}}}\right) \right| \left| {f}\left( x\right) - {P}\left( x;{\hat{\mathbf {x}}},\mathbf {y}\right) \right| \\\le & {} L \left( \max _{0 \le k \le n} \left| {\ell _{k}}\left( x;\, {\hat{\mathbf {x}}}\right) \left( x - \hat{x}_k \right) \right| \right) \, \sum _{k=0}^n \left| z_k\right| \\&+\,\varLambda _{\hat{x}^-,\hat{x}^+,{\hat{\mathbf {x}}}} \left\| \mathbf {z} \right\| _\infty \sup _{\hat{x}^- \le x \le \hat{x}^+} \left| {f}\left( x\right) - {P}\left( x,{\hat{\mathbf {x}}},\mathbf {y}\right) \right| . \end{aligned}$$

This inequality yields (47). In order to prove (48), note that, according to Theorem 16.5 on page 196 of [11] (Jackson’s Theorem), there exists a polynomial \(P^*\) of degree n such that

$$\begin{aligned} \sup _{x \in [\hat{x}^-,\hat{x}^+]} \left| {f}\left( x\right) - {P^*}\!\!\left( x\right) \right| \le \frac{L \left( \hat{x}^+ - \hat{x}^- \right) \pi }{4 \left( n + 1 \right) }. \end{aligned}$$
(79)

Using the well known bound

$$\begin{aligned} \max _{\hat{x}^- \le x \le \hat{x}^+} \left| {f}\left( x\right) - {P}\left( x,{\hat{\mathbf {x}}},\mathbf {y}\right) \right| \le \left( 1 + \varLambda _{\hat{x}^-,\hat{x}^+,{\hat{\mathbf {x}}}} \right) \sup _{\hat{x}^- \le x \le \hat{x}^+} \left| {f}\left( x\right) - {P^*}\!\!\left( x\right) \right| \end{aligned}$$

and (79) we deduce that

$$\begin{aligned}&\varLambda _{\hat{x}^-,\hat{x}^+,{\hat{\mathbf {x}}}} \left\| \mathbf {z} \right\| _\infty \sup _{\hat{x}^- \le x \le \hat{x}^+} \left| {f}\left( x\right) - {P}\left( x,{\hat{\mathbf {x}}},\mathbf {y}\right) \right| \\&\quad \le \varLambda _{\hat{x}^-,\hat{x}^+,{\hat{\mathbf {x}}}} \left( 1 + \varLambda _{\hat{x}^-,\hat{x}^+,{\hat{\mathbf {x}}}} \right) \left\| \mathbf {z} \right\| _\infty \frac{L \left( \hat{x}^+ - \hat{x}^- \right) \pi }{4 \left( n + 1 \right) }, \end{aligned}$$

and (48) follows from this last bound and (47). \(\square \)

Proof of Theorem 6

The definition of Lagrange polynomial (19) leads to

$$\begin{aligned} {\ell _{k}}\left( x;\, \mathbf {x}\right) \left( x - x_k \right) = {\lambda _k}\left( \mathbf {x}\right) \prod _{k = 0}^n \left( x - x_k \right) . \end{aligned}$$

In [12]’s notation, we have \({\lambda _k}\left( \mathbf {x}^c\right) = 1/{\phi _{n+1}}'\!\left( x_k^c\right) \) and from its equations (5) and (6) we obtain \({\lambda _k}\left( \mathbf {x}^c\right) \le 2^{n-1}/n\). At the top of the second column on page 156 of [12] we learn that \(\left| \prod _{k = 0}^n \left( x - x^c_k \right) \right| \le 2^{1 - n}\) for \(x \in [-1,1]\), and combining these bounds we obtain

$$\begin{aligned} {\tau }\left( \mathbf {x}^c\right) = \max _{x \in [-1,1]} \left| {\ell _{k}}\left( x;\, \mathbf {x^c}\right) \left( x - x^c_k \right) \right| \le \left( 2^{n-1}/n \right) \times 2^{1 - n} = 1/n. \end{aligned}$$
(80)

The bounds (57)–(59) and (80) and Lemma 7 lead to

$$\begin{aligned} {\tau }\left( {\hat{\mathbf {x}}}^c\right) \left\| \mathbf {z} \right\| _1\le & {} \frac{1.0097}{n} \times 3.2704 \times 10^{-16} n^2 \left( 2.9 + \log n \right) ^2\\\le & {} 3.3022 \times 10^{-16} n \left( 2.9 + \log n \right) ^2. \end{aligned}$$

The bounds (57) and (26) show that, in Salzer’s case, for \(A = 0.67667\) and \(B = 1.0236\),

$$\begin{aligned}&\varLambda _{\hat{x}^-,\hat{x}^{+},{\hat{\mathbf {x}}}} \left( 1 + \varLambda _{\hat{x}^{-},\hat{x}^{+},{\hat{\mathbf {x}}}} \right) \left\| \mathbf {z} \right\| _\infty \frac{\left( \hat{x}^+ - \hat{x}^- \right) \pi }{4 \left( n + 1 \right) } \\&\quad \le \left( A \log n + B \right) \left( A \log n + B + 1 \right) \times 1.1328 \times 10^{-15} n^2 \frac{\pi }{2 \left( n + 1 \right) }\\&\quad \le \left( \log n + B/A \right) \left( \log n + \left( B + 1 \right) /A \right) \times A^2 \times 1.7794 \times 10^{-15} n\\&\quad \le 8.1476 \times 10^{-16} n \left( 2.9 + \log n \right) ^2, \end{aligned}$$

because \(\left( x + \frac{B}{A} \right) \left( x + \frac{B + 1}{A} \right) \le \left( 2.9 + x \right) ^2\) for all \(x \ge 0\). The last two bounds and Theorem 5 yield

$$\begin{aligned} \left| {E}\left( \hat{x}\, ;{\hat{\mathbf {x}}}, \mathbf {y}, \mathbf {z}\right) \right| \le 1.1450 \times 10^{-15} L n \left( 2.9 + \log n \right) ^2, \end{aligned}$$

and Lemma 3 and Theorem 4 lead to

$$\begin{aligned} \left| {q}\left( \hat{x};{\hat{\mathbf {x}}}^c,\mathbf {y}, {\lambda }\left( \mathbf {x}^c\right) \right) - {P}\left( \hat{x};{\hat{\mathbf {x}}}^c,\mathbf {y}\right) \right| \le 1.2042 \times 10^{-15} L n \left( 2.9 + \log n \right) ^2, \end{aligned}$$

which proves (50). Equation (51) follows from this bound, \(\left\| \hat{\mathbf {x}} - \mathbf {x} \right\| _\infty \le 4.6\times 10^{-6}\), \(n \ge 10\) and (31). \(\square \)
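The bound (80) can also be checked directly in floating point. The sketch below (our grid-based estimate) uses \({\ell _{k}}\left( x;\, \mathbf {x}\right) \left( x - x_k \right) = {\lambda _k} \prod _j \left( x - x_j \right) \), so \(\tau \) factors into \(\max _k |\lambda _k|\) times \(\max _x |\prod _j (x - x_j)|\):

```python
import math

def cheb2(n):
    """Chebyshev points of the second kind, increasing on [-1, 1]."""
    return [math.sin((2 * k - n) * math.pi / (2 * n)) for k in range(n + 1)]

def tau_estimate(nodes, grid=4001):
    """Grid estimate of tau = max_{k, x} |l_k(x) (x - x_k)| on [-1, 1]."""
    m = len(nodes)
    lam = [1.0 / math.prod(nodes[k] - nodes[j] for j in range(m) if j != k)
           for k in range(m)]
    lmax = max(abs(l) for l in lam)
    pmax = max(abs(math.prod((-1.0 + 2.0 * i / (grid - 1)) - xj
                             for xj in nodes))
               for i in range(grid))
    return lmax * pmax
```

For n = 16 this gives roughly 0.062, below the bound \(1/n = 0.0625\) from (80).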

Proof of Theorem 7

The hypothesis of this theorem asks for \(n \le 10^{6}\). Therefore, \(2 n \le 2 \times 10^{6}\) and the bounds (44) and (26) for \(2 n + 1\) nodes yield

$$\begin{aligned} \max _{\hat{x}^- \le x \le \hat{x}^+} \left| {E}\left( x\, ;{\hat{\mathbf {x}}}, \mathbf {y}, \mathbf {z}\right) \right| \le b_n L \left( A \log n + B + A \log 2 \right) \end{aligned}$$
(81)

for \(A = 0.67667\) and \(B = 1.0236\). Theorem 4, (57) and (26) for n nodes lead to an upper bound of

$$\begin{aligned} \frac{A L b_n }{1 - 10.841 \times 4.5312 \times 10^{-3}} \left( \log n + \frac{B}{A} + \log 2 \right) \le 0.71163 L b_n \left( \log n + 2.2059 \right) ,\nonumber \\ \end{aligned}$$
(82)

and this leads to (52). Finally, (53) follows from (82), \(\left\| \hat{\mathbf {x}} - \mathbf {x} \right\| _\infty \le 4.6\times 10^{-6}\) and Theorem 1. \(\square \)

Appendix 2: Experimental details

The details regarding the software and hardware used in the experiments are described in the online version. Here we describe the sets of points and bins used in the experiments. The sets \(X_{-1,n}\) and \(X_{0,n}\) in Tables 1, 4 and 5 contain \(10^5\) points each. These points are distributed in 100 intervals \(\left( x_k,x_{k+1} \right) \): \(X_{-1,n}\) uses \(0 \le k < 100\) and \(X_{0,n}\) uses \(n/2 - 100 \le k < n/2\). In each interval \(\left( x_k,x_{k+1} \right) \) we picked the 200 floating point numbers to the right of \(x_k\) and the 200 floating point numbers to the left of \(x_{k+1}\); the remaining 600 points were equally spaced in \(\left( x_k,x_{k+1} \right) \). The step II errors in Tables 1 and 5 were estimated by performing step II in double precision, step III was evaluated with gcc 4.8.1’s __float128 arithmetic, and the result was compared with the interpolated function evaluated with __float128 arithmetic.
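The "floating point numbers to the right (or left) of a node" used above can be enumerated with `math.nextafter` (Python 3.9+). A sketch with our own helper names:

```python
import math

def floats_right(x, count):
    """The `count` floating point numbers immediately to the right of x."""
    out = []
    for _ in range(count):
        x = math.nextafter(x, math.inf)
        out.append(x)
    return out

def floats_left(x, count):
    """The `count` floating point numbers immediately to the left of x."""
    out = []
    for _ in range(count):
        x = math.nextafter(x, -math.inf)
        out.append(x)
    return out
```

For example, `floats_right(x_k, 200) + floats_left(x_next, 200)` plus 600 equally spaced interior points rebuilds one of the 100 intervals described above.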

The trial points in Fig. 2 were chosen as follows: for each n we considered the relative errors \(z_k^s\) and \(z_k^r\) for the Salzer weights and the numerical weights. We then picked 4 groups of ten indexes: the indexes corresponding to the ten largest values of \(y_k z_k^s\), the indexes corresponding to the ten smallest values of \(y_k z_k^s\), and the analogous 20 indexes for \(z_k^r\). We then removed the repeated indexes and obtained a vector with \(n_i\) indexes. For each index \(k > 0\) we picked the 2000 floating point numbers to the left of \(\hat{x}_k\), and for each index \(k < n\) we picked the 2000 floating point numbers to the right of \(\hat{x}_k\). We then considered the \(n_i\) intervals of the form \([x_{k-1},x_{k}]\) for \(k = 1, n/n_i, 2 n / n_i \dots \) and picked 2000 equally spaced points in each of these intervals. The first formula was evaluated at these points as in subsection 3.1 of [8].

The \(b_n\) in Table 3 were computed using __float128 arithmetic, from nodes \(\hat{x}_k\) obtained from the formula \(x_k = {\sin }\left( \frac{2k - n}{2n} \pi \right) \) using IEEE-754 double arithmetic. The versions with 3 bins in Tables 4, 5 and 6 are as in Fig. 6. The versions with 39 bins consider the central bin \([-2^{-10},2^{-10}]\), with base \(b_{20} = 0\), the bins \([-1,2^{-10} - 1)\), \([2^{-k} - 1, 2^{1 - k} - 1)\) for \(2 \le k \le 10\) and \([- 2^{-k}, -2^{-k - 1})\) for \(1 \le k < 9\), with base at their left extreme point. The remaining 19 bins and bases were obtained by reflection around 0. The versions with 79 bins consider the central bin \([-2^{-20},2^{-20}]\), with base \(b_{40} = 0\), the bins \([-1,2^{-20} - 1)\), \([2^{-k} -1, 2^{1 - k}-1)\) for \(1 \le k < 20\) and \([-2^{-k}, -2^{-k-1})\) for \(1 \le k < 19\), with base at their left extreme point. The remaining 39 bins and bases were obtained by reflection.
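The sine formula above is the paper's recommended way to evaluate the Chebyshev nodes; computed this way the double-precision nodes are symmetric about 0 to within one ulp (exactly so on typical libms, whose sine is odd), unlike the textbook cosine formula. A sketch comparing the two (our function names):

```python
import math

def nodes_sine(n):
    """x_k = sin((2k - n) pi / (2n)): Chebyshev points of the second
    kind, increasing, computed symmetrically about 0."""
    return [math.sin((2 * k - n) * math.pi / (2 * n)) for k in range(n + 1)]

def nodes_cosine(n):
    """The same points via x_k = -cos(k pi / n); mathematically equal,
    but the floating point results need not be symmetric."""
    return [-math.cos(k * math.pi / n) for k in range(n + 1)]
```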


Cite this article

Mascarenhas, W.F., de Camargo, A.P. The effects of rounding errors in the nodes on barycentric interpolation. Numer. Math. 135, 113–141 (2017). https://doi.org/10.1007/s00211-016-0798-x

