As mentioned in Sect. 3, the ill-conditioning of RBF interpolation is a well-known challenge. However, RBF interpolation within finite volume methods is of a slightly different nature. In general, the RBF approximation achieves an exponential order of convergence for smooth functions by increasing the number of interpolation nodes in a given domain. The setting for finite volume methods is different: the number of interpolation points remains fixed at a rather low number of nodes and only the fill distance is reduced.
From [11, 12] it is known that combining polyharmonic and Gaussian RBFs with polynomials overcomes the stagnation error. Bayona [3] shows that under certain assumptions the order of convergence is ensured by the polynomial part.
We propose to use multiquadratic rather than polyharmonic or Gaussian RBFs to enable the use of the smoothness indicator developed in [20]. Since the RBFs are only used to ensure the solvability of the linear system, we can use
$$\begin{aligned} \varepsilon = \frac{1}{\varDelta x}, \end{aligned}$$
(16)
as the shape parameter with the separation distance \(\varDelta x :=\min _{i\ne j}{\Vert \mathbf{x} _i-\mathbf{x} _j\Vert }\) for the interpolation nodes \(\mathbf{x} _1,\dots , \mathbf{x} _n\) with \(n\in \mathbb {N}\). To control the conditioning of the polynomial part we use the basis
$$\begin{aligned} p_i(x) = \tilde{p}_i(\varepsilon (\mathbf{x} -{\tilde{\mathbf{x}}})), \end{aligned}$$
(17)
for \(i = 1,\dots , m\) with \(\tilde{p}_i\in \{\mathbb {R}^d\rightarrow \mathbb {R}, \mathbf{x} \mapsto x_1^{\alpha _1}\dots x_d^{\alpha _d}| \; \sum _{j=1}^d \alpha _j < l, \alpha _j\in \mathbb {N}\}\), \(\text {deg}(\tilde{p}_i) \leqslant \text {deg}(\tilde{p}_{i+1})\) and \({\tilde{\mathbf{x}}}\in \{\mathbf{x }_1,\dots ,\mathbf{x }_n\}\). The best choice for \({\tilde{\mathbf{x}}}\) would be the barycenter of the stencil. However, to use the same polynomials for different stencils in the ENO scheme, we choose the central node.
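The scaling in (16) and (17) can be sketched in a few lines; the helper names and the one-dimensional monomial basis below are ours for illustration:

```python
import numpy as np

def separation_distance(nodes):
    """Delta x = min_{i != j} ||x_i - x_j|| for nodes of shape (n, d)."""
    dists = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    return dists.min()

def shape_parameter(nodes):
    """Shape parameter epsilon = 1 / Delta x, as in (16)."""
    return 1.0 / separation_distance(nodes)

def scaled_monomials(nodes, center, eps, degree):
    """One-dimensional version of the basis (17): row l is
    [1, t_l, ..., t_l^(degree-1)] with t_l = eps * (x_l - center)."""
    t = eps * (np.asarray(nodes) - center)
    return np.vstack([t**k for k in range(degree)]).T
```

For the nodes 0, 0.1, 0.3 this gives \(\varDelta x = 0.1\) and \(\varepsilon = 10\).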
Remark 1
The interpolation matrix is the same as the one obtained with the interpolation basis \(\tilde{p}_i\), \(i=1,\dots ,m\), the RBFs with shape parameter 1, and the nodes \(\tilde{\mathbf{x }}_1,\dots , \tilde{\mathbf{x }}_n\) with \(\tilde{\mathbf{x }}_i = \varepsilon (\mathbf{x} _i-\mathbf{x} _1)\). This holds for any \(\varDelta x \rightarrow 0\), since the scaled nodes always satisfy \(\varDelta \tilde{x} = 1\). Thus, the interpolation step in the finite volume method has the same condition number for all refinements as long as the interpolation nodes have a similar distribution.
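Remark 1 can be checked numerically in one dimension; the node pattern below and the plain multiquadratic \(\sqrt{1+(\varepsilon r)^2}\) without polynomial augmentation are illustrative choices:

```python
import numpy as np

def mq_matrix(nodes, eps):
    """Multiquadratic interpolation matrix A_ij = sqrt(1 + (eps |x_i - x_j|)^2)."""
    r = np.abs(nodes[:, None] - nodes[None, :])
    return np.sqrt(1.0 + (eps * r)**2)

base = np.array([0.0, 1.0, 2.5, 4.0])   # fixed node distribution, Delta x = 1
conds = []
for h in [1.0, 0.1, 0.01]:              # refinements keeping the same distribution
    nodes = h * base
    eps = 1.0 / h                       # eps = 1 / Delta x, as in (16)
    conds.append(np.linalg.cond(mq_matrix(nodes, eps)))
# the three condition numbers coincide, as stated in Remark 1
```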
Stability Estimate for RBF Coefficients
In this section, we analyze the stability of the RBF interpolation based on (16) and (17) and show that the stability of the RBF coefficients depends only on the number of interpolation nodes n. Then, for the one-dimensional case, we show that the stability of the polynomial coefficients depends on n and on the ratio of the maximum distance between the interpolation points, Dx, to the minimum distance \(\varDelta x\). For higher dimensions we conjecture that a similar result holds.
From [28] we obtain the following.
Lemma 1
(Stability estimate [28]) For (11) the following stability estimate holds:
$$\begin{aligned} \frac{\Vert \varDelta a\Vert _2}{\Vert a\Vert _2}\leqslant \frac{\lambda _{max}}{\lambda _{min}}\frac{\Vert \varDelta f\Vert _{2}}{\Vert f-P b\Vert _{2}}, \end{aligned}$$
(18)
with \(\lambda _{min} := \inf _{a\ne 0, P^T a = 0} \frac{a^TA a}{a^T a}\) and \(\lambda _{max}\) the maximal eigenvalue. Further, there exists an estimate for the polynomial coefficients
$$\begin{aligned} \frac{\Vert \varDelta b\Vert _2}{\Vert b\Vert _2}&\leqslant \frac{\lambda _{max,P^TP}}{\lambda _{min,P^TP}}\frac{\Vert P^T (\varDelta f - A\varDelta a)\Vert _{2}}{\Vert P^T ( f - A a)\Vert _{2}}, \end{aligned}$$
(19)
$$\begin{aligned}&\leqslant \Big (1+\frac{\lambda _{max}}{\lambda _{min}}\Big ) \frac{\lambda _{max,P^TP}}{\lambda _{min,P^TP}}\frac{\Vert P^T \varDelta f \Vert _{2}}{\Vert P^T ( f - A a)\Vert _{2}}, \end{aligned}$$
(20)
with the maximal and minimal eigenvalue of \(P^TP\), \(\lambda _{max,P^TP}\), \(\lambda _{min,P^TP}\).
Thus, the stability of the method depends on the ratios
$$\begin{aligned} \lambda _{max}/ \lambda _{min} \quad \hbox {and}\quad \lambda _{max,P^TP}/ \lambda _{min,P^TP}. \end{aligned}$$
The maximal eigenvalues can be estimated by
$$\begin{aligned} \lambda _{max} = {\sup _{a\ne 0}} \frac{a^TA a}{a^T a} = \Vert A\Vert _{2}\leqslant \Vert A\Vert _{F} \leqslant n \max _{i,j}|A_{i,j}|. \end{aligned}$$
(21)
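The chain of inequalities in (21) is easy to verify numerically; the random nodes and the multiquadratic kernel below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 1.0, 6))
eps = 1.0 / np.min(np.diff(X))          # shape parameter (16) in one dimension
A = np.sqrt(1.0 + (eps * np.abs(X[:, None] - X[None, :]))**2)

lam_max = np.linalg.eigvalsh(A).max()   # largest eigenvalue of the symmetric matrix A
frob = np.linalg.norm(A, 'fro')         # ||A||_F
entry_bound = len(X) * np.abs(A).max()  # n * max_ij |A_ij|
# as in (21): lam_max <= ||A||_F <= n * max_ij |A_ij|
```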
Note that \(\lambda _{min}\) is not the smallest eigenvalue of A, but its definition is similar. Schaback [28] established the following lower bound.
Lemma 2
(Lower bound of \(\lambda _{min}\) [28]) Let \(\phi \) be an even conditionally positive definite function with positive generalized Fourier transform \(\hat{\phi }\). Then
$$\begin{aligned} \lambda _{min} \geqslant \frac{\varphi _0(M)}{2 \varGamma (d/2 + 1)}\Big (\frac{M}{2\sqrt{\pi }}\Big )^d, \end{aligned}$$
(22)
with the function
$$\begin{aligned} \varphi _0(r) := \inf _{\Vert \omega \Vert _2\leqslant 2r}\hat{\phi }(\omega ), \end{aligned}$$
(23)
for \(M>0\) satisfying
$$\begin{aligned} M\geqslant \frac{12}{\varDelta x}\Big (\frac{\pi \varGamma ^2(d/2+1)}{9}\Big )^{1/(d+1)}, \end{aligned}$$
(24)
or
$$\begin{aligned} M\geqslant \frac{6.38 d}{\varDelta x}, \end{aligned}$$
(25)
and with
$$\begin{aligned} \varGamma (x) = \int _0^\infty t^{x-1}\exp (-t)\mathrm {d}t, \qquad {\text {Re}}(x) > 0. \end{aligned}$$
(26)
It remains to estimate \(\varphi _0(M)\) depending on the RBFs. Estimates for the common examples in Table 1 are given in the following lemmas.
Lemma 3
(Estimate of \(\varphi _0\) for multiquadratics [28]) Let \(\phi \) be the multiquadratic RBF, then
$$\begin{aligned} \varphi _0(M) \geqslant \frac{\pi ^{d/2}\varGamma (d/2+\nu ) M^{-d-2\nu }\exp (-2M/\varepsilon )}{\varGamma (-\nu )}. \end{aligned}$$
(27)
Note that the lower bound of \(\varphi _0\) of Lemma 3 is zero for \(\nu \in \mathbb {N}\).
Lemma 4
(Estimate of \(\varphi _0\) for Gaussians [35]) Let \(\phi \) be the Gaussian RBF, then
$$\begin{aligned} \varphi _0(M) = (2\varepsilon ^2)^{-d/2}\exp (-M^2/\varepsilon ^2). \end{aligned}$$
(28)
Lemma 5
(Estimate of \(\varphi _0\) for polyharmonics [35]) Let \(\phi (r) = (-1)^{k+1}r^{2k}\log (r)\) be a polyharmonic RBF, then
$$\begin{aligned} \varphi _0(M) = (-1)^{k+1}2^{2k-1+d/2}\varGamma (k+d/2)k!(2M)^{-d-2k}. \end{aligned}$$
(29)
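Combining Lemma 2 with the Gaussian estimate (28) gives a computable lower bound. The sketch below (our helper, using the simpler choice (25) for M and the shape parameter (16)) also shows that the bound in d = 1 is independent of \(\varDelta x\):

```python
import math

def lambda_min_bound_gaussian(dx, d=1):
    """Lower bound (22) for lambda_min with the Gaussian RBF:
    M from (25), phi_0 from (28), eps = 1/dx from (16)."""
    eps = 1.0 / dx
    M = 6.38 * d / dx
    phi0 = (2.0 * eps**2)**(-d / 2.0) * math.exp(-(M / eps)**2)
    return phi0 / (2.0 * math.gamma(d / 2.0 + 1.0)) * (M / (2.0 * math.sqrt(math.pi)))**d

# M/eps = 6.38*d is independent of dx, and phi0 ~ dx^d cancels M^d ~ dx^(-d),
# so the bound is the same for every dx (cf. Corollary 1 below)
```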
Corollary 1
By using the shape parameter (16) we recover
$$\begin{aligned} \frac{\Vert \varDelta a\Vert _2}{\Vert a\Vert _2}\leqslant {C(n,d)\Vert \varDelta f\Vert _2}, \end{aligned}$$
(30)
for all \(\mathbf{x} _1,\dots ,\mathbf{x} _n\) with \(n\in \mathbb {N}\), where the constant C(n, d) depends only on the number of interpolation nodes n and the dimension d.
Proof
From Remark 1 we conclude
$$\begin{aligned} a := a(\mathbf{x} _1,\dots ,\mathbf{x} _n) = a(\tilde{\mathbf{x }}_1,\dots ,\tilde{\mathbf{x }}_n) =: \tilde{a}. \end{aligned}$$
(31)
From Lemmas 2 and 3 we obtain
$$\begin{aligned} \frac{\Vert \varDelta a\Vert _2}{\Vert a\Vert _2} = \frac{\Vert \varDelta \tilde{a}\Vert _2}{\Vert \tilde{a}\Vert _2} \leqslant {C(n,d,\varDelta \tilde{x})\Vert \varDelta f\Vert _2 = C(n,d,1)\Vert \varDelta f\Vert _2}, \end{aligned}$$
(32)
with a constant \(C(n,d,\varDelta \tilde{x})\) which depends on n, d and the separation distance of the scaled nodes; since \(\varDelta \tilde{x} = 1\), it reduces to the constant C(n, d). \(\square \)
Hence, the stability of the RBF coefficients depends only on the number of interpolation nodes n. This analysis is dimension-independent, and it remains to estimate the ratio \( \lambda _{max,P^TP}/ \lambda _{min,P^TP}\).
Stability Estimate for Polynomial Coefficients
The analysis of the Gram matrix \(G :=P^T P\in \mathbb {R}^{m\times m}\) is more challenging. For the polynomial basis (17) we have
$$\begin{aligned} G_{ij} = \sum _{l=1}^n p_i(\mathbf{x} _l)p_j(\mathbf{x} _l). \end{aligned}$$
(33)
We note that \(P = \tilde{P}\) where \((\tilde{P})_{i,j} = \tilde{p}_i(\tilde{\mathbf{x }}_j)\) with \(\tilde{\mathbf{x }}_j = \varepsilon (\mathbf{x} _j - \mathbf{x} _1)\). In the one-dimensional case, the following estimate of the condition number holds for the Vandermonde matrix.
Lemma 6
(Conditioning of the Vandermonde matrix in one dimension, [13]) Let \(V_n\) be the Vandermonde matrix \((V_n)_{i,j} = z_j^i\) with \(z_i \ne z_j\) for \(i\ne j\) and \(z_j\in \mathbb {C}\). It holds that
$$\begin{aligned} \max _j\prod _{i\ne j}\frac{\max (1,|z_i|)}{|z_j-z_i|}<\Vert V_n^{-1}\Vert _{\infty }\leqslant \max _j\prod _{i\ne j}\frac{1+|z_i|}{|z_j-z_i|}. \end{aligned}$$
(34)
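Lemma 6 is straightforward to check numerically; the node set below is an arbitrary illustrative choice:

```python
import numpy as np

z = np.array([0.0, 1.0, 2.0, 3.5])           # distinct nodes
n = len(z)
V = z[None, :] ** np.arange(n)[:, None]      # (V_n)_{i,j} = z_j^i, i = 0, ..., n-1
inv_norm = np.linalg.norm(np.linalg.inv(V), np.inf)

lower = max(np.prod([max(1.0, abs(z[i])) / abs(z[j] - z[i])
                     for i in range(n) if i != j]) for j in range(n))
upper = max(np.prod([(1.0 + abs(z[i])) / abs(z[j] - z[i])
                     for i in range(n) if i != j]) for j in range(n))
# the two products bracket ||V_n^{-1}||_inf, as stated in (34)
```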
Corollary 2
$$\begin{aligned} \frac{\lambda _{max,P^TP}}{\lambda _{min,P^TP}} \leqslant \bigg (\frac{Dx}{\varDelta x}+1\bigg ) ^2\bigg (\frac{Dx}{\varDelta x}\bigg )^{2n}\frac{n^4}{\big (\left\lfloor n/2-1 \right\rfloor !\big )^4}, \end{aligned}$$
(35)
with \(Dx = \max _{i\ne j} |x_i-x_j|\).
Proof
We start with the estimate of \(\Vert P\Vert _{\infty }\)
$$\begin{aligned} \Vert P\Vert _{\infty }&= \max _{i} \sum _{j=1}^n \Big (\frac{x_i-x_1}{\varDelta x}\Big )^{j-1} \leqslant \max _{i} \sum _{j=1}^n \Big (\frac{Dx}{\varDelta x}\Big )^{j-1} \leqslant \frac{\big (\frac{Dx}{\varDelta x}\big )^n -1}{\frac{Dx}{\varDelta x}-1},\end{aligned}$$
(36)
$$\begin{aligned}&\leqslant n \bigg (\frac{Dx}{\varDelta x}\bigg )^n. \end{aligned}$$
(37)
To estimate the norm of \(P^{-1}\) we use Lemma 6:
$$\begin{aligned} \Vert P^{-1}\Vert _{\infty }&\leqslant \max _{i}\prod _{j\ne i}\frac{1 + |\tilde{x}_j|}{|\tilde{x}_i - \tilde{x}_j|} = \max _{i}\prod _{j\ne i}\frac{\varDelta x +| {x}_j-x_1|}{|{x}_i - {x}_j|},\\&\leqslant \max _{i}\prod _{j\ne i}\frac{\varDelta x +Dx}{|j-i|\varDelta x}= \bigg (\frac{Dx}{\varDelta x}+1\bigg )\max _{i}\frac{1}{\prod _{j\ne i}|j-i|},\\&\leqslant \bigg (\frac{Dx}{\varDelta x}+1\bigg )\frac{1}{\prod _{j\ne \left\lfloor n/2 \right\rfloor }|j- \left\lfloor n/2 \right\rfloor |} \leqslant \bigg (\frac{Dx}{\varDelta x}+1\bigg )\frac{1}{\prod _{j<\left\lfloor n/2 \right\rfloor }|j|^2},\\&\leqslant \bigg (\frac{Dx}{\varDelta x}+1\bigg )\frac{1}{ \big (\left\lfloor n/2-1 \right\rfloor !\big )^2}. \end{aligned}$$
Furthermore, we have the standard estimate
$$\begin{aligned} \frac{1}{\sqrt{n}}\Vert A\Vert _{\infty } \leqslant \Vert A\Vert _{2}\leqslant \Vert A\Vert _{\infty }\sqrt{m}, \end{aligned}$$
(38)
for \(A \in \mathbb {R}^{m\times n}\). From [32] we recover
$$\begin{aligned} \hbox {cond}_2 P^T P =( \hbox {cond}_2 P)^2, \end{aligned}$$
(39)
when \(n = m\). Combined, this yields
$$\begin{aligned} \frac{\lambda _{max,P^TP}}{\lambda _{min,P^TP}} = \Vert P^{-1}\Vert _2 ^2\Vert P\Vert _{2}^2 \leqslant n^2 \Vert P^{-1}\Vert _{\infty }^2 \Vert P\Vert _{\infty }^2. \end{aligned}$$
(40)
Applying Corollary 2 to uniformly distributed nodes in \(\mathbb {R}\) we obtain \(Dx/\varDelta x = n-1\), and the condition number of \(P^T P\) is uniformly bounded for all \(\varDelta x\) by
$$\begin{aligned} \frac{\lambda _{max,P^TP}}{\lambda _{min,P^TP}} \leqslant \frac{(n-1)^{2n}n^6}{\big (\left\lfloor n/2-1 \right\rfloor !\big )^4}. \end{aligned}$$
(41)
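This uniform boundedness is easy to observe numerically: with the scaled basis (17), the Gram matrix, and hence its condition number, is identical on every refinement of a uniform grid. The helper below is a sketch with illustrative node counts:

```python
import numpy as np

def gram_cond(nodes):
    """cond_2 of G = P^T P with the scaled monomial basis (17),
    P_{l,i} = ((x_l - x_1) / Delta x)^(i-1)."""
    nodes = np.sort(np.asarray(nodes))
    dx = np.min(np.diff(nodes))
    t = (nodes - nodes[0]) / dx
    P = t[:, None] ** np.arange(len(nodes))[None, :]
    return np.linalg.cond(P.T @ P)

# uniform nodes: Dx/Delta x = n - 1 is fixed, so cond(G) is refinement-independent
conds = [gram_cond(np.linspace(0.0, 4 * h, 5)) for h in [1.0, 0.1, 0.01]]
```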
The proof of this estimate does not carry over to two-dimensional interpolation. However, we conjecture that similar bounds hold, as is supported by Table 2. Note that the reconstructions from (6) are based on a stencil in a grid. Thus, \(Dx/\varDelta x\) is bounded for these interpolation problems.
Table 2 Comparison of maximum and minimum condition numbers arising for different polynomial degrees and different orders k of MQs

Approximation by RBF Interpolation Augmented with Polynomials
Considering the ansatz (8) for the interpolation problem (7), (9), Bayona shows in [3], under the assumption that A and P have full rank, that the order of convergence is at least \(\mathcal {O}(h^{l+1})\) based on the polynomial part. With similar techniques we can relax the assumption of full rank of A by assuming \(\varphi \) to be a conditionally positive definite RBF of order \(l+1\).
Theorem 1
Let f be an analytic multivariate function and \(\varphi \) a conditionally positive definite RBF of order \(l+1\). Further, we assume the existence of a \(\varPi _{l}(\mathbb {R}^d)\)-unisolvent subset of X. Then
$$\begin{aligned} \Vert s_{f,X} - f\Vert _{\infty } \leqslant \mathcal {O}(h^{l+1}). \end{aligned}$$
(42)
Proof
Let us consider \(\mathbf {x}_0\in \mathbb {R}^d\), which does not have to be a node. Since f is analytic, it admits a Taylor expansion in a neighborhood of \(\mathbf {x}_0\)
$$\begin{aligned} f(\mathbf {x}) = \sum _{k\geqslant 1} L_k[f(\mathbf {x}_0)] p_k(\mathbf {x}-\mathbf {x}_0), \end{aligned}$$
(43)
with \(L_k[f(\mathbf {x}_0)]\in \mathbb {R}\) the coefficients for f around \(\mathbf {x}_0\), e.g., \(L_k[f(\mathbf {x}_0)] = \frac{1}{k!}f^{(k)}(\mathbf {x}_0)\) for univariate functions. Thus, we recover
$$\begin{aligned} f|_X = (f(\mathbf {x}_i))_{i=1}^n = \sum _{k\geqslant 1}L_k[f(\mathbf {x}_0)] \mathbf {p}_k, \end{aligned}$$
(44)
with \(\mathbf {p}_k =(p_k(\mathbf {x}_i-\mathbf {x}_0))_{i=1}^n.\)
The coefficient vectors \(\mathbf {a}_k\in \mathbb {R}^n\) and \(\mathbf {b}_k\in \mathbb {R}^m\) are given by
$$\begin{aligned} \begin{pmatrix} A &{} \quad P \\ P^T &{} \quad 0 \end{pmatrix} \begin{pmatrix} \mathbf {a}_k\\ \mathbf {b}_k \end{pmatrix} = \begin{pmatrix} \mathbf {p}_k\\ 0 \end{pmatrix}, \end{aligned}$$
(45)
and they satisfy
$$\begin{aligned} \begin{pmatrix} a\\ b \end{pmatrix} = \sum _{k\geqslant 1} L_k[f(\mathbf {x}_0)] \begin{pmatrix} \mathbf {a}_k\\ \mathbf {b}_k \end{pmatrix}. \end{aligned}$$
(46)
Since there exists a \(\varPi _{l}(\mathbb {R}^d)\)-unisolvent subset and by the well-posedness of (45), we have
$$\begin{aligned} a_{k,i} = 0, \qquad b_{k,j} = \delta _{k,j}, \end{aligned}$$
(47)
for \(i = 1,\dots ,n\) and \(j,k = 1,\dots ,m\). This allows us to write the interpolation function as
$$\begin{aligned} s_{f,X}(\mathbf {x})&= \sum _{i=1}^n a_i\phi (\mathbf {x}-\mathbf {x}_i) + \sum _{j=1}^m b_j p_j(\mathbf {x}),\end{aligned}$$
(48)
$$\begin{aligned}&=\sum _{k = 1}^m L_k[f(\mathbf {x}_0)] p_k(\mathbf {x}) + \sum _{k>m} \sum _{i=1}^n L_k[f(\mathbf {x}_0)] a_{k,i}\phi _i(\mathbf {x})\end{aligned}$$
(49)
$$\begin{aligned}&\quad +\, \sum _{k>m} \sum _{l=1}^m L_k[f(\mathbf {x}_0)] b_{k,l}p_l(\mathbf {x}), \end{aligned}$$
(50)
and recover
$$\begin{aligned} f(\mathbf {x}) - s_{f,X}(\mathbf {x})&= \sum _{k>m} L_k[f(\mathbf {x}_0)] p_k(\mathbf {x}) - \sum _{k>m} \sum _{i=1}^n L_k[f(\mathbf {x}_0)] a_{k,i}\phi _i(\mathbf {x})\nonumber \\&\quad -\, \sum _{k>m} \sum _{l=1}^m L_k[f(\mathbf {x}_0)] b_{k,l}p_l(\mathbf {x}) = r_m(\mathbf {x}) - s_{r_m,X}(\mathbf {x}), \end{aligned}$$
(51)
with \(r_m(\mathbf {x}) = \sum _{k>m} L_k[f(\mathbf {x}_0)] p_k(\mathbf {x})\) and \(\phi _i(\mathbf {x}) := \phi (\mathbf {x}-\mathbf {x}_i)\).
Given the estimate of De Marchi et al. [8]
$$\begin{aligned} \Vert s_{f,X}\Vert _{\infty } \leqslant C (\Vert f\Vert _{\ell _\infty (X)} + \Vert f\Vert _{\ell _2 (X)}), \end{aligned}$$
(52)
we conclude
$$\begin{aligned} \Vert f-s_{f,X}\Vert _{\infty } = \Vert r_m-s_{r_m,X}\Vert _{\infty } \leqslant C h^{l+1}, \end{aligned}$$
(53)
since \(\Vert r_m\Vert _{\infty } \leqslant C h^{l+1}\). \(\square \)
Numerical Examples
In this section, we seek to verify the results in the finite volume setup (fixed number of interpolation nodes). Let \(\varOmega = [0,1]^2\), \(f:\varOmega \rightarrow \mathbb {R}\) a function, and \(\delta >0\). We approximate f by dividing the domain into subdomains of size \(\delta \times \delta \) and solving in each subdomain the interpolation problem with N nodes given by a Halton sequence [16]. Since the condition number depends on the ratio of the maximal distance to the separation distance, \(Dx/\varDelta x\), we use Halton sequences with a separation distance larger than \(0.5\delta /\sqrt{N}\). We test the following functions
$$\begin{aligned} f_1(x,y)&= \sin (2\pi (x^2+2y^2))-\sin (2\pi (2x^2+(y-0.5)^2)),\\ f_2(x,y)&= \exp (-(x-0.5)^2-(y-0.5)^2),\\ f_3(x,y)&= \sin (2x)+\exp (-x),\\ f_4(x,y)&= 1+ \sin (4x)+\cos (3x)+\sin (2y). \end{aligned}$$
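A minimal sketch of the per-subdomain interpolation step follows. First-order multiquadratics with linear polynomial augmentation and random nodes replace the full setup of the paper (Halton nodes, higher degrees), so the sketch only illustrates the structure of the augmented system:

```python
import numpy as np

def mq_poly_interpolant(X, f_vals, eps):
    """Interpolant s(x) = sum_i a_i phi(||x - x_i||) + sum_j b_j p_j(x) with
    phi(r) = sqrt(1 + (eps r)^2) and linear polynomials p = (1, x, y)."""
    n = X.shape[0]
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = np.sqrt(1.0 + (eps * r)**2)
    P = np.hstack([np.ones((n, 1)), X])
    m = P.shape[1]
    K = np.block([[A, P], [P.T, np.zeros((m, m))]])   # saddle-point system
    sol = np.linalg.solve(K, np.concatenate([f_vals, np.zeros(m)]))
    a, b = sol[:n], sol[n:]

    def s(x):
        rx = np.linalg.norm(X - x, axis=-1)
        return np.sqrt(1.0 + (eps * rx)**2) @ a + np.concatenate([[1.0], x]) @ b
    return s

f2 = lambda x, y: np.exp(-(x - 0.5)**2 - (y - 0.5)**2)
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (8, 2))
dx = min(np.linalg.norm(X[i] - X[j]) for i in range(8) for j in range(i))
s = mq_poly_interpolant(X, f2(X[:, 0], X[:, 1]), eps=1.0 / dx)  # eps from (16)
```

By construction, s matches \(f_2\) at the nodes and, thanks to the augmentation, reproduces linear polynomials exactly.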
In Figs. 2 and 3 we show the error of the interpolation problem and confirm the correct order of convergence for the multiquadratic interpolation of order \(k \leqslant {l}\) augmented with a polynomial of degree l. For polynomial degree \({l} = 4\) we observe that the convergence breaks down for \(\delta < 2^{-7}\). This happens at small errors of \(\approx 10^{-15}\) and high condition numbers \(>10^{13}\), as shown in Table 2.
Furthermore, we verify the results from Sect. 4. Table 2 supports the conjecture that the condition number remains constant for a fixed number of interpolation nodes n and a fixed ratio \(Dx/\varDelta x\).
We also observe that the condition number stays constant under grid refinement and is considerably smaller for first-order multiquadratics (\(k=1\)) than for the higher order ones.