
Journal of Scientific Computing, Volume 75, Issue 3, pp 1555–1580

Radial Basis Function Methods for the Rosenau Equation and Other Higher Order PDEs

  • Ali Safdari-Vaighani
  • Elisabeth Larsson
  • Alfa Heryudono
Open Access
Article

Abstract

Meshfree methods based on radial basis function (RBF) approximation are of interest for numerical solution of partial differential equations because they are flexible with respect to the geometry of the computational domain, they can provide high order convergence, they are not more complicated for problems with many space dimensions and they allow for local refinement. The aim of this paper is to show that the solution of the Rosenau equation, as an example of an initial-boundary value problem with multiple boundary conditions, can be computed using RBF approximation methods. We extend the fictitious point method and the resampling method to work in combination with an RBF collocation method. Both approaches are implemented in one and two space dimensions. The accuracy of the RBF fictitious point method is analyzed partly theoretically and partly numerically. The error estimates indicate that a high order of convergence can be achieved for the Rosenau equation. The numerical experiments show that both methods perform well. In the one-dimensional case, the accuracy of the RBF approaches is compared with that of the corresponding pseudospectral methods, showing similar or slightly better accuracy for the RBF methods. In the two-dimensional case, the Rosenau problem is solved both in a square domain and in an irregular domain with smooth boundary, to illustrate the capability of the RBF-based methods to handle irregular geometries.

Keywords

Collocation method · Radial basis function · Fictitious point · Pseudospectral · Resampling · Rosenau equation · Multiple boundary conditions

Mathematics Subject Classification

65M70 · 35G31

1 Introduction

The Rosenau equation has become an established research subject in the field of mathematical physics since its introduction in the late 80s by Philip Rosenau [31]. The equation is intended to overcome shortcomings of the already famous Korteweg–de Vries (KdV) equation [15] in describing phenomena of solitary wave interaction. Knowledge about this interaction, particularly when two or more wave packets called solitons collide with one another, is indispensable in digital transmission through optical fibers. As data carriers, we need solitons that interact “cleanly” in the sense that none of the solitons lose any information, shape, or other conserved quantities when they pass through each other. One may consult [7] for a fascinating history of this subject. The Rosenau equation in its general form is given by
$$\begin{aligned} u_t+\alpha (\mathbf {x},t)\varDelta ^2u_{t}=\nabla \cdot g(u),\quad (\mathbf {x},t)\in \varOmega \times (0,\,T], \end{aligned}$$
(1.1)
where \(\varOmega \) is a bounded domain in \(\mathbb {R}^d\) (\(d\le 3\)), the coefficient \(\alpha (\mathbf {x},t)\ge \alpha _0>0\) is bounded in the domain \(\varOmega \times [0,T]\), and the nonlinear function g(u) is a polynomial of degree \(q+1\), \(q\ge 1\). Multiple boundary conditions are required at the boundary \(\partial \varOmega \), such as
$$\begin{aligned} u(\mathbf {x},t)&=f_1(\mathbf {x},t),\quad (\mathbf {x},t)\in \partial \varOmega \times (0,\,T], \end{aligned}$$
(1.2)
$$\begin{aligned} \frac{\partial u}{\partial n}(\mathbf {x},t)&=f_2(\mathbf {x},t),\quad (\mathbf {x},t)\in \partial \varOmega \times (0,\,T], \end{aligned}$$
(1.3)
where n is the outward normal direction from the boundary, and we need an initial condition
$$\begin{aligned} u(\mathbf {x},0)=f_0(\mathbf {x}),\quad \mathbf {x}\in \varOmega . \end{aligned}$$
The well-posedness of the Rosenau equation has been studied theoretically by Park [27, 28]. Yet in practice, the equation poses difficulties to solve numerically due to the presence of non-linearity, high spatial derivatives, multiple boundary conditions, and mixed time and space derivatives. Numerical studies based on Galerkin formulations can be found in [5, 6, 21, 24], and numerical studies based on finite difference methods are found in [1, 4, 19, 26].

The objective of this paper is to derive numerical methods based on radial basis function (RBF) collocation methods [14, 22] for the Rosenau equation that can be applied to problems in one, two, and three space dimensions, for non-trivial geometries. These methods will also be applicable to other higher order partial differential equations. We derive and implement a fictitious point RBF (FP–RBF) collocation method and a resampling RBF (RS–RBF) collocation method, and perform experiments in one and two space dimensions. We investigate the accuracy and behavior of the derived methods theoretically and numerically. We also compare the RBF methods with pseudospectral (PS) methods [12, 35] with respect to accuracy in one space dimension.

In this paper we use global RBF approximation as a test case for implementation of multiple boundary conditions in general geometries. The current direction in the research on RBF approximation methods for PDEs is towards the use of localized RBF approximation methods. The main categories are stencil-based methods (RBF-FD) [2, 11] and partition of unity methods (RBF–PUM) [23, 32]. The FP–RBF technique should carry over in both cases, with minor differences in the implementation, whereas the RS–RBF method should be applicable to RBF–PUM, but not as easily to RBF-FD.

The outline of the paper is as follows: In Sect. 2, a basic RBF collocation scheme is introduced. Section 3 describes different approaches to handle the multiple boundary conditions. Then in Sect. 4 the theoretical approximation properties of the RBF method for the one-dimensional Rosenau problem are discussed, while the details of the analysis are given in Appendix A. The implementation aspects are discussed in Sect. 5, followed by numerical results in Sect. 6. Finally, Sect. 7 contains conclusions and discussion.

2 The Basic RBF Collocation Scheme

The approaches for handling multiple boundary conditions implemented in this paper are combined with an RBF collocation method. In this section, we introduce the general notation and quantities we need for RBF approximation of (time-dependent) PDEs. We start from given scalar function values \(u(x_j)\) at scattered distinct node locations \(x_j\in \mathbb {R}^d\), \(j=1,\ldots ,N\). We assume that the function is approximated by a standard RBF interpolant
$$\begin{aligned} s(x)=\sum ^{N}_{j=1}\lambda _j\phi (\Vert x-x_j\Vert ), \end{aligned}$$
(2.1)
where \(\Vert \cdot \Vert \) is the Euclidean norm and \(\phi \) is a real-valued function such as the inverse multiquadric \(\phi (r)=\frac{1}{\sqrt{\varepsilon ^2r^2+1}}\). The coefficients \(\lambda _j\in \mathbb {R}\) are determined by the interpolation conditions \(s(x_j)=u(x_j),\) \(j=1,\ldots ,N\). In matrix form we have
$$\begin{aligned} A\bar{\lambda }=\bar{u}, \end{aligned}$$
(2.2)
where the matrix elements \(A_{ij}=\phi (\Vert x_i-x_j\Vert ),~i,j=1,\ldots ,N\), the vector \(\bar{\lambda }=[\lambda _1,\ldots ,\lambda _N]^T\), and \(\bar{u}=[u(x_1),\ldots , u(x_N)]^T\). When solving a PDE, we prefer to work with the discrete function values instead of the coefficients. Using (2.1) and (2.2) together, we see that the interpolant can be written
$$\begin{aligned} s(x)=\bar{\phi }(x)\bar{\lambda }=\bar{\phi }(x)A^{-1}\bar{u}, \end{aligned}$$
(2.3)
where \(\bar{\phi }(x)=[\phi (\Vert x-x_1\Vert ), \ldots , \phi (\Vert x-x_N\Vert )]\), assuming that A is non-singular. This holds for commonly used RBFs such as Gaussians, inverse multiquadrics and multiquadrics [25, 33] for distinct node points \(x_j\). Furthermore, we can use (2.3) to identify cardinal basis functions such that we can write the approximation in the FEM-like form
$$\begin{aligned} s(x)=\bar{\psi }(x)\bar{u}=\sum ^N_{j=1}\psi _j(x)u(x_j), \end{aligned}$$
(2.4)
where \(\bar{\phi }(x)A^{-1}=\bar{\psi }(x)=[\psi _1(x),\ldots ,\psi _N(x)]\). Because our final target is to solve PDEs, we need to look at how to apply a linear operator \(\mathscr {L}\) to the RBF approximation to compute \(s_{\mathscr {L}}=[\mathscr {L}s(x_1),\ldots , \mathscr {L}s(x_N)]^T\). In cardinal form, we get
$$\begin{aligned} \mathscr {L}{s}(x)=\mathscr {L}\bar{\psi }(x)\bar{u} =\sum ^N_{j=1}\mathscr {L}\psi _j(x)u(x_j). \end{aligned}$$
(2.5)
Using relation (2.3), the differentiation matrix \(\varPsi _\mathscr {L}=\{\mathscr {L}\psi _j(x_i)\}_{i,j=1,\ldots ,N}\) under operator \(\mathscr {L}\) is given by
$$\begin{aligned} \varPsi _\mathscr {L}=\varPhi _\mathscr {L}A^{-1}, \end{aligned}$$
(2.6)
where \(\varPhi _\mathscr {L}=\{\mathscr {L}\phi (\Vert x_i-x_j\Vert )\}_{i,j=1,\ldots ,N}\).
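As a concrete illustration, the construction (2.6) for the first derivative of the inverse multiquadric in one space dimension can be sketched in a few lines of Python/NumPy (a sketch with illustrative node count and shape parameter; the paper's reference implementation is in MATLAB):

```python
import numpy as np

def imq_diff_matrix(x, eps):
    """First-derivative differentiation matrix Psi_x = Phi_x A^{-1} (Eq. (2.6))
    for the inverse multiquadric phi(r) = 1/sqrt(1 + eps^2 r^2) in 1D."""
    d = x[:, None] - x[None, :]        # signed distances x_i - x_j
    P = 1.0 + (eps * d) ** 2
    A = 1.0 / np.sqrt(P)               # A_ij = phi(||x_i - x_j||)
    Phi_x = -eps**2 * d / P**1.5       # d/dx phi(||x - x_j||) evaluated at x_i
    # Psi_x = Phi_x A^{-1}; A is symmetric, so solve A X = Phi_x^T and transpose
    return np.linalg.solve(A, Phi_x.T).T

# Sanity check: differentiate a smooth function at the nodes.
x = np.linspace(-1.0, 1.0, 30)
D = imq_diff_matrix(x, eps=3.0)
err = np.max(np.abs(D @ np.sin(2 * x) - 2 * np.cos(2 * x)))
```

Applying the matrix to nodal values is the discrete analogue of (2.5).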
For time-dependent PDE problems, we use the RBF approximation in space and then discretize the time interval. The solution \(u(x,t)\) is approximated by
$$\begin{aligned} s(x,t)=\sum ^N_{j=1}\psi _j(x)u_j(t), \end{aligned}$$
(2.7)
where \(u_j(t)\approx u(x_j,t)\) are the unknown functions to determine.

3 Dealing with Multiple Boundary Conditions

If we consider the one-dimensional version of equations (1.1)–(1.3) on a symmetric interval \(x\in [-L,\,L]\) we have
$$\begin{aligned} u_t+\alpha (x,t)u_{xxxxt}= g_u(u)u_x,\quad (x,t)\in [-L,\,L]\times (0,\,T], \end{aligned}$$
(3.1)
with boundary conditions
$$\begin{aligned} u(\pm L,t)&=f_1(\pm L,t),\quad t\in (0,\,T], \end{aligned}$$
(3.2)
$$\begin{aligned} u_x(\pm L,t)&=f_2(\pm L,t),\quad t\in (0,\,T]. \end{aligned}$$
(3.3)
Even for the one-dimensional case, how to implement multiple boundary conditions for a time-dependent global collocation problem is not obvious. In our case, we need to enforce two boundary conditions at each end point resulting in a total of four boundary conditions at the two boundary points. Collocating the PDE at all interior node points leads to a situation where we have more equations than node points. That is, unless we accept an overdetermined system, we need to either increase the number of degrees of freedom or discard some of the equations.
In fact, the subtleties of implementing multiple BCs are not tied to RBF methods. They have been actively researched in conjunction with other global collocation methods, particularly pseudospectral methods, since the 70s. We list the following five popular methods:
  1. Mixed hard and weak BCs [17]
  2. Spectral penalty method [16]
  3. Transforming to lower order system [34]
  4. Fictitious/ghost point method [13]
  5. Resampling method [9]
In this paper, we only consider methods (3)–(5) as we currently do not have a way to find penalty parameters for methods (1) and (2) that give numerically stable solutions.

3.1 Transforming to Lower Order System

A common approach when solving PDEs containing high order derivatives is to transform them into a system with lower order derivatives. By letting \(w = u_x\), the Rosenau equation (3.1) is transformed into
$$\begin{aligned} u_t + \alpha (x,t)w_{xxxt}&= wg_u(u)\\ w_t - u_{xt}&= 0 \end{aligned}$$
with boundary conditions \(u(\pm L,t)=f_1(\pm L,t)\) and \(w(\pm L,t)=f_2(\pm L,t)\). The advantage of this method is that the Neumann conditions for u at the boundaries become Dirichlet conditions for w. However, the system to solve becomes twice as large, as we need a total of 2N degrees of freedom for u and w. Due to this reason, and especially for global RBFs where differentiation matrices are dense, we are not pursuing this method. However, for localized RBF methods, where differentiation matrices are sparse, this method may still be worth trying.

3.2 Fictitious Point Method

Fictitious or ghost point methods have been commonly used as a way to enforce multiple boundary conditions in finite difference methods. The implementation for global collocation methods such as pseudospectral methods is due to Fornberg [13].

Let \(-L=x_2<x_3<\cdots < x_{N-1}=L\) be distinct node points. The Dirichlet conditions (1.2) can be imposed by fixing the values for \(u(x_2)\) and \(u(x_{N-1})\), but for the Neumann conditions (1.3) we use the fictitious point approach proposed by Fornberg [13], and introduce two additional points at some arbitrary locations denoted by \(x_1\) and \(x_{N}\).

We introduce an RBF approximation s(xt) as in (2.7), extended to include the fictitious points, for the spatial approximation of the solution u(xt),
$$\begin{aligned} s(x,t)=\sum ^{N}_{j=1}\psi _j(x)u_j(t). \end{aligned}$$
(3.4)
Loosely following the fictitious point approach, we will modify this ansatz so that the boundary conditions are fulfilled. Conditions (1.2) are easily fulfilled by replacing \(u_j(t)\) with \(f_1(x_j,t)\) for \(j=2,N-1\). For the conditions (1.3), we need to formally solve a linear system. Define the vectors \(S_f=[u_1(t),\, u_{N}(t)]^T\) with values at the two fictitious points, and \(S_d=[u_3(t),\ldots , u_{N-2}(t)]^T\) containing the approximate solution values at points in the interior of the domain, then we have
$$\begin{aligned} \underbrace{ \left( \begin{array}{cc} \psi _1^\prime (x_2) &{} \psi _{N}^\prime (x_2)\\ \psi _1^\prime (x_{N-1}) &{} \psi _{N}^\prime (x_{N-1}) \end{array} \right) }_{B_f} S_f&+ \underbrace{ \left( \begin{array}{ccc} \psi _3^\prime (x_2) &{} \cdots &{} \psi _{N-2}^\prime (x_2)\\ \psi _3^\prime (x_{N-1}) &{} \cdots &{} \psi _{N-2}^\prime (x_{N-1}) \end{array} \right) }_{B_d} S_d \nonumber \\&+\underbrace{ \left( \begin{array}{cc} \psi _2^\prime (x_2) &{} \psi _{N-1}^\prime (x_2)\\ \psi _2^\prime (x_{N-1}) &{} \psi _{N-1}^\prime (x_{N-1})\\ \end{array} \right) }_{B_b} F_1(t) = F_2(t), \end{aligned}$$
(3.5)
where \(F_j(t)=[f_j(x_2,t),\,f_j(x_{N-1},t)]^T\). Inserting the boundary values \(F_1(t)\) and the expression we get for \(S_f\) by solving (3.5) into (3.4) leads to
$$\begin{aligned} s(x,t)&= \left( [\psi _3(x),\ldots ,\psi _{N-2}(x)]-[\psi _1(x),\, \psi _{N}(x)]\,B_f^{-1}B_d\right) \,S_d \nonumber \\&\quad +\,\left( [\psi _2(x),\, \psi _{N-1}(x)]-[\psi _1(x),\, \psi _{N}(x)]\,B_f^{-1}B_b \right) F_1(t)\nonumber \\&\quad +\,[\psi _1(x),\, \psi _{N}(x)]\,B_f^{-1}F_2(t). \end{aligned}$$
(3.6)
This expression is awkward to work with directly. We introduce the shorthand notation
$$\begin{aligned} s(x,t)= \sum ^{N-2}_{j=3}\tilde{\psi }_j(x)u_j(t)+F(x,t), \end{aligned}$$
(3.7)
where \(\tilde{\psi }_j(x)\) and \(F(x,t)\) can be directly identified from (3.6). In this simple two-point boundary case, we can actually derive the explicit form of the modified basis for illustration. This yields
$$\begin{aligned} \tilde{\psi }_j(x)&=\psi _j(x)- \frac{\psi _{N}^\prime (x_{N-1}) \psi _{j}^\prime (x_2)- \psi _{N}^\prime (x_2) \psi _{j}^\prime (x_{N-1}) }{ \psi _{N}^\prime (x_{N-1})\psi _1^\prime (x_2)- \psi _{N}^\prime (x_2) \psi _1^\prime (x_{N-1}) }\psi _1(x)\nonumber \\&\quad + \frac{ \psi _{1}^\prime (x_{N-1}) \psi _{j}^\prime (x_2)- \psi _{1}^\prime (x_2) \psi _{j}^\prime (x_{N-1}) }{ \psi _{N}^\prime (x_{N-1}) \psi _1^\prime (x_2) - \psi _{N}^\prime (x_2) \psi _1^\prime (x_{N-1}) } \psi _{N}(x). \end{aligned}$$
(3.8)
In order to use the RBF approximation (3.7) for a PDE problem, we need to compute the effect of applying a spatial differential operator \(\mathscr {L}\) at the interior node points. That is, we need a method to evaluate \(\mathscr {L}\tilde{\psi }_j(x_i)\), \(i,j=3,\ldots ,N-2\), and \(\mathscr {L}F(x_i,t)\), \(i=3,\ldots ,N-2\). This is done in two steps. First, we use (2.6) to compute \(\varPsi _\mathscr {L}\) for interior node points \(x_i\), \(i=3,\ldots ,N-2\). Note however that we include all basis functions \(\psi _j(x)\), \(j=1,\ldots ,N\). Then we extract the columns pertaining to the fictitious points into \(\varPsi _{\mathscr {L},f}\), the columns pertaining to the boundary points into \(\varPsi _{\mathscr {L},b}\), and the remaining columns into \(\varPsi _{\mathscr {L},d}\). Then the modified differentiation matrix and the contribution in the forcing function can be computed as
$$\begin{aligned} \tilde{\varPsi }_{\mathscr {L}}= & {} \varPsi _{\mathscr {L},d}-\varPsi _{\mathscr {L},f}B_f^{-1}B_d, \end{aligned}$$
(3.9)
$$\begin{aligned} {}[F_{\mathscr {L}}(x_3,t),\ldots ,F_{\mathscr {L}}(x_{N-2},t)]^T= & {} (\varPsi _{\mathscr {L},b}-\varPsi _{\mathscr {L},f}B_f^{-1}B_b)F_1(t)+\varPsi _{\mathscr {L},f}B_f^{-1}F_2(t). \nonumber \\ \end{aligned}$$
(3.10)
Note that from (2.6) and the definitions above, if no operator is applied, we have \(\varPsi _{I,d}=I\) and \(\varPsi _{I,f}=\varPsi _{I,b}=0\) leading to \(F(x_i,t)=F_t(x_i,t)=0\). We use this to simplify all subsequent expressions where the RBF approximation (3.7) or its time derivative are evaluated at the node points.
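The elimination step can be illustrated with a short Python sketch (ours; the test function and parameters are illustrative choices). Solving (3.5) for the fictitious values, \(S_f=B_f^{-1}(F_2-B_dS_d-B_bF_1)\), makes the global interpolant satisfy the Neumann conditions exactly:

```python
import numpy as np

def cardinal_deriv_rows(x, eps, rows):
    """Rows psi_j'(x_i), i in `rows`, of the first-derivative cardinal matrix
    (inverse multiquadric, Eq. (2.6))."""
    d = x[:, None] - x[None, :]
    P = 1.0 + (eps * d) ** 2
    A = 1.0 / np.sqrt(P)
    Phi_x = -eps**2 * d / P**1.5
    return np.linalg.solve(A, Phi_x.T).T[rows]

# Node layout of Sect. 3.2: x_1, x_N fictitious, x_2 = -L, x_{N-1} = L.
L, N, eps = 1.0, 20, 3.0
h = 2 * L / (N - 3)
x = np.linspace(-L - h, L + h, N)
fict, bnd, intr = [0, N - 1], [1, N - 2], np.arange(2, N - 2)

D = cardinal_deriv_rows(x, eps, bnd)           # psi_j'(+-L) for all j
B_f, B_b, B_d = D[:, fict], D[:, bnd], D[:, intr]

u  = lambda z: 1.0 / np.cosh(z)                # test function sech(x)
ux = lambda z: -np.tanh(z) / np.cosh(z)
S_d, F1, F2 = u(x[intr]), u(x[bnd]), ux(x[bnd])

S_f = np.linalg.solve(B_f, F2 - B_d @ S_d - B_b @ F1)   # eliminate via (3.5)
u_all = np.empty(N)
u_all[fict], u_all[bnd], u_all[intr] = S_f, F1, S_d
residual = np.max(np.abs(D @ u_all - F2))      # Neumann conditions at +-L
```

By construction the residual is zero up to roundoff, independently of how well the interpolant approximates u elsewhere.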
Collocating the modified RBF approximation (3.7) with the PDE (1.1) at the node points leads to the system of ODEs
$$\begin{aligned} u_i^\prime (t)&+\sum _{j=3}^{N-2}\alpha (x_i,t)\frac{d^4 \tilde{\psi }_j}{d x^4}(x_i)u_j^\prime (t)=\sum _{j=3}^{N-2}g_u(u_i(t))\frac{d \tilde{\psi }_j}{d x}(x_i)u_j(t) \nonumber \\&-\alpha (x_i,t) F_{xxxxt}(x_i,t)+g_u(u_i(t))F_x(x_i,t), \quad i=3,\ldots ,N-2. \end{aligned}$$
(3.11)
In matrix form, we get the method of lines formulation
$$\begin{aligned} \underbrace{(I+A_\alpha (t)\tilde{\varPsi }_{xxxx})}_{Q(t)}S_d^\prime =\underbrace{G_u(S_d)\tilde{\varPsi }_x}_{D(S_d)}S_d+\underbrace{G_u(S_d)F_x(t) - A_\alpha (t)F_{xxxx}^\prime (t)}_{F(S_d,t)}, \end{aligned}$$
(3.12)
where the diagonal coefficient matrices are
$$\begin{aligned} A_\alpha (t)= & {} \mathrm {diag}(\alpha (x_3,t),\ldots ,\alpha (x_{N-2},t)),\\ G_u(S_d)= & {} \mathrm {diag}(g_u(u_3(t)),\ldots ,g_u(u_{N-2}(t))), \end{aligned}$$
and the vectors on the right-hand side are defined as \(F_\mathscr {L}(t)=[F_\mathscr {L}(x_3,t),\ldots ,F_\mathscr {L}(x_{N-2},t)]^T\). The problem (3.12) can be solved by employing a solution method for nonlinear ODE systems.

The mass matrix Q(t) is in general invertible but non-singularity cannot be guaranteed. Kansa [20] argued that if the centers of the RBFs are distinct and the PDE problem is well-posed, matrices discretizing spatial operators are generally found to be non-singular. Hon and Schaback [18] showed that occurrences of singular matrices are very rare, but do exist. When \(\alpha (x,t)\) is constant, the mass matrix is constant over time. Then we can LU-factorize the matrix once and use this factorization throughout the time stepping algorithm.

An alternative to the derivation above is to use the original cardinal basis functions, and include the boundary condition equations in the final system. Define rectangular identity matrices \(I_k\) such that \(I_k(S_d,\,S_b,\,S_f)^T=S_k\), for \(k=d,b,f\). Also, we order the columns in the differentiation matrices in accordance with the order of the unknowns such that \(\varPsi _\mathscr {L}=[\varPsi _{\mathscr {L},d},\,\varPsi _{\mathscr {L},b},\varPsi _{\mathscr {L},f}]\). Then we can express (3.12) as
$$\begin{aligned} \left( \begin{array}{c} I_d+A_{\alpha }\varPsi _{xxxx} \\ 0\\ 0 \end{array}\right) \left( \begin{array}{c} S_d^\prime \\ S_b^\prime \\ S_f^\prime \end{array}\right) = \left( \begin{array}{c} G_u(S_d)\varPsi _x\\ I_b\\ B_d\ \ B_b\ \ B_f \end{array}\right) \left( \begin{array}{c} S_d\\ S_b\\ S_f \end{array}\right) - \left( \begin{array}{c} 0\\ F_1(t)\\ F_2(t) \end{array}\right) . \end{aligned}$$
(3.13)
In this case, the mass matrix is singular, and a differential algebraic equation (DAE) solver is required to compute the solution of the resulting system.

3.3 Resampling Method

In the resampling method, we do not add any points as in the fictitious point method of the previous section. The four boundary conditions are still enforced at the boundary points as algebraic equations, but the PDE is instead collocated at \(N-4\) auxiliary interior points. Let the solution \(u(x,t)\) be approximated in Lagrange form by
$$\begin{aligned} s(x,t)=\sum ^{N}_{j=1}\psi _j(x)u_j(t), \end{aligned}$$
(3.14)
where \(x_1\) and \(x_N\) are boundary points and \(x_2,\ldots ,x_{N-1}\) are interior points. The four algebraic equations arising from the boundary conditions are
$$\begin{aligned} u_1(t) = f_1(x_1,t),&\quad u_N(t) = f_1(x_N,t), \end{aligned}$$
(3.15)
$$\begin{aligned} \sum _{j=1}^{N}\frac{d\psi _j}{dx}({x}_1)u_j(t) = f_2(x_1,t),&\quad \sum _{j=1}^{N}\frac{d\psi _j}{dx}({x}_N)u_j(t) = f_2(x_N,t). \end{aligned}$$
(3.16)
To write the equations in matrix form, we again divide the unknown functions into parts, \(S_d\) for interior node points and \(S_b\) for boundary node points. Then the boundary conditions can be expressed as
$$\begin{aligned} \left( \begin{array}{cc} 0 &{} I_b\\ \tilde{B}_d &{} \tilde{B}_b \end{array}\right) \left( \begin{array}{c} S_d\\ S_b\\ \end{array}\right) = \left( \begin{array}{c} F_1(t)\\ F_2(t) \end{array}\right) , \end{aligned}$$
(3.17)
where the boundary condition matrices \(\tilde{B}_d\) and \(\tilde{B}_b\) are analogous to those in the previous section, but with slightly different basis functions as there are no fictitious points. We have N unknown functions, and we have used four equations for the boundary conditions. This means that we need \(N-4\) additional equations. The idea in the resampling method is to collocate the PDE at \(N-4\) auxiliary interior points, instead of collocating at the node points. We define the auxiliary points \(\tilde{x}_i\), \(i=1,\ldots ,N-4\) and collocate the PDE to get the equations
$$\begin{aligned} \sum _{j=1}^{N}\left[ \psi _j(\tilde{x}_i)+\alpha (\tilde{x}_i,t)\frac{d^4 \psi _j}{d x^4}(\tilde{x}_i)\right] u_j^\prime (t)=\sum _{j=1}^{N}\left[ g_u\left( \sum _{k=1}^{N}\psi _k(\tilde{x}_i)u_k(t)\right) \frac{d \psi _j}{d x}(\tilde{x}_i)\right] u_j(t), \end{aligned}$$
(3.18)
Define the resampling matrices \(\varPsi _\mathscr {L}^R=\{\mathscr {L}\psi _j(\tilde{x}_i)\}_{i=1,\ldots ,N-4,\, j=2,\ldots ,N-1,1,N}\) (columns ordered according to the unknowns). The resampled equations (3.18), together with the algebraic equations (3.17), yield an \(N \times N\) DAE of index 1,
$$\begin{aligned} \left( \begin{array}{c} \varPsi ^R+\tilde{A}_{\alpha }\varPsi ^R_{xxxx} \\ 0\\ 0 \end{array}\right) \left( \begin{array}{c} S_d^\prime \\ S_b^\prime \end{array}\right) = \left( \begin{array}{c} G_u(\tilde{S})\varPsi ^R_x\\ 0\ \ I_b\\ \tilde{B}_d\ \ \tilde{B}_b \end{array}\right) \left( \begin{array}{c} S_d\\ S_b \end{array}\right) - \left( \begin{array}{c} 0\\ F_1(t)\\ F_2(t) \end{array}\right) , \end{aligned}$$
(3.19)
where \(\tilde{A}_{\alpha }=\mathrm {diag}(\alpha (\tilde{x}_1,t),\ldots ,\alpha (\tilde{x}_{N-4},t))\) and \(\tilde{S}=\varPsi ^R\left( \begin{array}{c} S_d\\ S_b\\ \end{array}\right) \).

The system of equations (3.19) can be solved using a differential algebraic solver such as DASPK [3, 29]. An example of how this can be implemented in MATLAB is given in Sect. 5.
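The resampling operator, i.e., the matrix \(\varPsi ^R\) that maps nodal values to interpolated values at the auxiliary points, can be sketched as follows (Python; node count, shape parameter, and test function are our illustrative choices):

```python
import numpy as np

def imq_eval_matrix(xe, xc, eps):
    """Evaluation matrix E A^{-1} mapping values at nodes xc to RBF-interpolated
    values at evaluation points xe (inverse multiquadric)."""
    A = 1.0 / np.sqrt(1.0 + (eps * (xc[:, None] - xc[None, :])) ** 2)
    E = 1.0 / np.sqrt(1.0 + (eps * (xe[:, None] - xc[None, :])) ** 2)
    return np.linalg.solve(A, E.T).T   # A is symmetric

# N nodes including the two boundary points; N-4 auxiliary interior points.
N, eps = 24, 3.0
x = np.linspace(-1.0, 1.0, N)
x_aux = np.linspace(-1.0, 1.0, N - 2)[1:-1]    # N-4 uniform interior points
R = imq_eval_matrix(x_aux, x, eps)

# The operator should reproduce a smooth function at the auxiliary points.
err = np.max(np.abs(R @ np.sin(2 * x) - np.sin(2 * x_aux)))
```

The same construction with \(\mathscr {L}=\mathrm {d}/\mathrm {d}x\) or \(\mathrm {d}^4/\mathrm {d}x^4\) applied to \(\phi \) gives the resampled derivative matrices \(\varPsi ^R_x\) and \(\varPsi ^R_{xxxx}\).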

3.4 Generalization to More Space Dimensions

The main difference when moving to more than one space dimension is that, instead of just two boundary points, we have a boundary curve or a boundary surface that is discretized in the same way as the interior of the domain. The formulation of the two methods is in all essential parts the same, and the formulations (3.13) and (3.19) are valid in the same form, but where we previously had two boundary points and two fictitious points, we instead have \(N_b\) boundary points and \(N_b\) fictitious points. Similarly, for the resampling method, we have \(2N_b\) boundary conditions, and therefore we collocate the PDE at \(N-2N_b\) auxiliary points. Experiments for problems in two space dimensions are presented in Sect. 6.
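As an illustration of the point layout in two dimensions (our construction; the paper does not prescribe where the fictitious points are placed), one option is to offset each boundary node a small distance \(\delta \) along the outward normal. For a disk of radius R this reads:

```python
import numpy as np

# N_b boundary nodes on the circle of radius R, with unit outward normals.
R, Nb, delta = 1.0, 16, 0.1                # delta is an illustrative offset
theta = 2 * np.pi * np.arange(Nb) / Nb
normals = np.column_stack((np.cos(theta), np.sin(theta)))
x_bnd = R * normals                        # N_b boundary points
x_fict = (R + delta) * normals             # N_b fictitious points outside

offsets = np.linalg.norm(x_fict - x_bnd, axis=1)
radii = np.linalg.norm(x_fict, axis=1)     # all lie outside the domain
```

For a general smooth boundary the same idea applies with numerically computed normals.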

4 Error Estimates

We have derived semi-discrete error estimates for the one-dimensional Rosenau problem and the FP–RBF approach. The details of the analysis are provided in Appendix A. Exponential convergence estimates for the spatial part of the error are based on theory for global RBF interpolants [30], and expressed in terms of the fill distance
$$\begin{aligned} h=\sup _{x\in \varOmega } \min _{x_j\in X}\Vert x-x_j\Vert . \end{aligned}$$
(4.1)
The estimates for the error growth in time are more problematic. The global error estimate is given by
$$\begin{aligned} \Vert E(t)\Vert _{\infty }\le C(t) e^{-\frac{\gamma }{\sqrt{h}}}e^{C_3(t)} \max _{0\le \tau \le t}\Vert u(\tau )\Vert _{\mathscr {N}(\varOmega )}, \end{aligned}$$
(4.2)
where the norm \(\Vert \cdot \Vert _{\mathscr {N}(\varOmega )}\) is the so-called native space norm associated with the chosen type of RBF. This estimate would be fine if the function \(C_3(t)\) were small. This part of the estimate is connected with the non-linear term and has the form
$$\begin{aligned} C_3(t) =\tilde{C}_q\Vert Q^{-1}\Vert \max (\Vert \tilde{\varPsi }_x\Vert ,\Vert B_x\Vert )r(t), \end{aligned}$$
(4.3)
where the function r(t) represents a polynomial growth factor in time. The function \(C_3(t)\) becomes large due to the matrix norms. When estimated separately like this, the product \(\Vert Q^{-1}\Vert \Vert \tilde{\varPsi }_x\Vert \) becomes large, growing as \(\mathscr {O}(h^{-1})\) (cf. Sect. A.5). The way we have derived the estimates does not easily allow for an estimate that takes the norm of the product \(\Vert Q^{-1}\tilde{\varPsi }_x\Vert \) instead. However, this is in principle the way the matrices appear in the critical terms, and if the product norm is investigated numerically, it turns out to be small.

Because of this overestimate of the time growth, the error estimates are not quantitatively useful, but they provide a qualitative insight into how the different errors interact, and illustrate the difficulties of bounding the non-linear terms in an effective way.
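The gap between the two ways of estimating can be inspected numerically. The following Python sketch (our parameter choices, mirroring the one-dimensional test case of Sect. 5) assembles \(Q\) and \(\tilde{\varPsi }_x\) for the FP–RBF method and compares \(\Vert Q^{-1}\tilde{\varPsi }_x\Vert \) with \(\Vert Q^{-1}\Vert \Vert \tilde{\varPsi }_x\Vert \); submultiplicativity guarantees that the former never exceeds the latter:

```python
import numpy as np

# Inverse multiquadric and its x-derivatives (t = x - x_j, P = 1 + eps^2 t^2).
def phi0(t, e):
    return 1.0 / np.sqrt(1.0 + (e * t) ** 2)
def phi1(t, e):
    return -e**2 * t / (1.0 + (e * t) ** 2) ** 1.5
def phi4(t, e):
    P = 1.0 + (e * t) ** 2
    return 9*e**4 / P**2.5 - 90*e**6 * t**2 / P**3.5 + 105*e**8 * t**4 / P**4.5

L, N = 1.0, 30
h = 2 * L / (N - 3)
eps = 0.08 / h                              # shape parameter relation of Sect. 6
x = np.linspace(-L - h, L + h, N)           # x[0], x[-1] are fictitious points
fict, bnd, intr = [0, N - 1], [1, N - 2], np.arange(2, N - 2)

d = x[:, None] - x[None, :]
A = phi0(d, eps)
Psi1 = np.linalg.solve(A, phi1(d, eps).T).T # cardinal first-derivative matrix
Psi4 = np.linalg.solve(A, phi4(d, eps).T).T # cardinal fourth-derivative matrix

B_f, B_d = Psi1[bnd][:, fict], Psi1[bnd][:, intr]
Elim = np.linalg.solve(B_f, B_d)
T1 = Psi1[intr][:, intr] - Psi1[intr][:, fict] @ Elim   # tilde-Psi_x, Eq. (3.9)
T4 = Psi4[intr][:, intr] - Psi4[intr][:, fict] @ Elim   # tilde-Psi_xxxx
Qinv = np.linalg.inv(np.eye(len(intr)) + 0.5 * T4)      # alpha = 0.5

separate = np.linalg.norm(Qinv, 2) * np.linalg.norm(T1, 2)  # enters C_3(t)
product  = np.linalg.norm(Qinv @ T1, 2)                     # what actually occurs
```

Printing the two quantities for this configuration shows the product norm well below the product of norms, consistent with the discussion above.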

5 MATLAB Implementation

In this section, sample MATLAB implementations of the fictitious point and resampling RBF methods for the one-dimensional Rosenau equation (3.1)–(3.3) are presented and discussed. We use an example with a known solution. For \(g(u)=10u^3-12u^5-\frac{3}{2}u\) and \(\alpha (x,t)=0.5\), it holds that \(u(x,t)=\mathrm {sech}(x-t)\) is a solution [28]. For both methods, equally spaced nodes are used, and the spatial domain is \([-L,L]\).

5.1 Implementation of the Fictitious Point Method

Following the idea of the fictitious point method in Sect. 3.2, we complement the interior and boundary RBF nodes with two (the number of boundary nodes) fictitious points outside the computational domain, see Fig. 1 for an illustration.
Fig. 1

An example of a node distribution for the fictitious point method in the one-dimensional case

We generate the differentiation matrices using the modified basis functions according to Equation (3.9). Collocating the Rosenau equation by applying the modified differentiation matrices leads to the ODE system (3.12), which we here solve using the built-in MATLAB ODE solver ode15s. A complete MATLAB implementation of the problem requires only two short functions.
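For readers without MATLAB, the same fictitious point scheme can be sketched in Python (ours; `scipy.integrate.solve_ivp` with the BDF method plays the role of ode15s, and since \(\alpha \) is constant the mass matrix is LU-factorized once, as noted in Sect. 3.2):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import lu_factor, lu_solve

# Inverse multiquadric and its x-derivatives (t = x - x_j, P = 1 + eps^2 t^2).
def phi0(t, e):
    return 1.0 / np.sqrt(1.0 + (e * t) ** 2)
def phi1(t, e):
    return -e**2 * t / (1.0 + (e * t) ** 2) ** 1.5
def phi4(t, e):
    P = 1.0 + (e * t) ** 2
    return 9*e**4 / P**2.5 - 90*e**6 * t**2 / P**3.5 + 105*e**8 * t**4 / P**4.5

# Exact solution u = sech(x - t) and the boundary data it induces.
sech   = lambda z: 1.0 / np.cosh(z)
u_ex   = lambda x, t: sech(x - t)                       # f_1
ux_ex  = lambda x, t: -sech(x - t) * np.tanh(x - t)     # f_2
ut_ex  = lambda x, t: sech(x - t) * np.tanh(x - t)      # d f_1 / dt
uxt_ex = lambda x, t: sech(x - t)**3 - sech(x - t) * np.tanh(x - t)**2  # d f_2 / dt

L, N, alpha = 1.0, 30, 0.5
h = 2 * L / (N - 3)
eps = 0.08 / h
x = np.linspace(-L - h, L + h, N)          # x[0], x[-1] are fictitious points
fict, bnd, intr = [0, N - 1], [1, N - 2], np.arange(2, N - 2)

d = x[:, None] - x[None, :]
A = phi0(d, eps)
Psi1 = np.linalg.solve(A, phi1(d, eps).T).T
Psi4 = np.linalg.solve(A, phi4(d, eps).T).T
B_f, B_b, B_d = Psi1[bnd][:, fict], Psi1[bnd][:, bnd], Psi1[bnd][:, intr]

def modified(P):
    """Modified differentiation matrix and forcing maps, Eqs. (3.9)-(3.10)."""
    Pd, Pb, Pf = P[intr][:, intr], P[intr][:, bnd], P[intr][:, fict]
    PfBfinv = np.linalg.solve(B_f.T, Pf.T).T            # Pf B_f^{-1}
    return Pd - PfBfinv @ B_d, Pb - PfBfinv @ B_b, PfBfinv

T1, C1b, C1f = modified(Psi1)              # tilde-Psi_x and maps for F_x
T4, C4b, C4f = modified(Psi4)              # tilde-Psi_xxxx and maps for F_xxxx

Q = np.eye(len(intr)) + alpha * T4         # constant mass matrix of (3.12)
Qlu = lu_factor(Q)
xb = x[bnd]

def rhs(t, S):
    gu = 30*S**2 - 60*S**4 - 1.5           # g_u for g(u) = 10u^3 - 12u^5 - 1.5u
    Fx  = C1b @ u_ex(xb, t) + C1f @ ux_ex(xb, t)        # F_x(t), Eq. (3.10)
    F4t = C4b @ ut_ex(xb, t) + C4f @ uxt_ex(xb, t)      # d F_xxxx / dt
    return lu_solve(Qlu, gu * (T1 @ S + Fx) - alpha * F4t)

S0 = u_ex(x[intr], 0.0)
sol = solve_ivp(rhs, (0.0, 0.5), S0, method="BDF", rtol=1e-8, atol=1e-10)
err = np.max(np.abs(sol.y[:, -1] - u_ex(x[intr], 0.5)))
```

The `rhs` function implements the right-hand side of (3.12) term by term; the boundary data and their time derivatives are taken from the exact solution.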

It can be noted that when we use the modified basis functions, we need to provide the time derivatives of the boundary conditions as well as the boundary conditions themselves. This is not needed with the alternative formulation (3.13), but instead the system is stated in DAE form.

5.2 Implementation of the Resampling RBF Method

For the resampling method, we start directly from the DAE form derived in Sect. 3.3, where the four boundary conditions enforced at the boundary points constitute the algebraic part. The \(N-4\) auxiliary points where the PDE is enforced are uniformly distributed in the interior of the computational domain and do not in general coincide with the RBF node points where the solution is sought. We organize the solution vector as \(S=[S_d ~S_b]^T\), where as before, \(S_d\) contains solution values at the interior RBF nodes, and \(S_b\) contains the two boundary values. The DAE discretization scheme then takes the form of Eq. (3.19), where \(\varPsi ^R\) is an \((N-4) \times N\) resampling matrix that provides values at the auxiliary points given values at the node points, \(\tilde{S}=\varPsi ^RS\), \(G_u(\tilde{S})\) is an \((N-4) \times (N-4)\) diagonal matrix, and \(\varPsi ^R_{x}\) and \(\varPsi ^R_{xxxx}\) are the \((N-4) \times N\) resampled first and fourth derivative matrices, respectively. The derivative boundary condition matrix \(\tilde{B}=(\tilde{B}_d~~\tilde{B}_b)\) is a \(2\times N\) matrix, see Eq. (3.17) for details.
Both ODEs and DAEs of index 1 can be solved in MATLAB using ode15s, previously employed for the fictitious point method. One may also use the syntactically similar open source software Octave, with daspk as the DAE solver. The resampling RBF method can likewise be implemented in two short MATLAB functions.

6 Numerical Results

In this section, we perform numerical experiments to investigate the accuracy and convergence of the proposed schemes. Both one-dimensional and two-dimensional test cases are considered. In all tests, the inverse multiquadric RBF is used. The shape parameter values are chosen using a parametric relation such that they fall within the region where the method is stable, while making the solution as accurate as possible. For the one-dimensional test case, we compare the results with those of a pseudo-spectral resampling (RS–PS) and a pseudo-spectral fictitious point (FP–PS) method. We have not included the code here, but it can be downloaded from the authors’ web pages.

6.1 The One-Dimensional Case

We consider the same test problem as in the sample implementations in Sect. 5, with \(g(u)=10u^3-12u^5-\frac{3}{2}u\) and known exact solution \(u(x,t)=\text {sech}(x-t)\). Equispaced node distributions over the interval \([-L,\,L]\) are used for the RBF methods. The total number of points N includes the fictitious points in the FP–RBF case. The initial and boundary conditions are taken from the exact solution.

The exact solution over the interval \([-L,\,L]\) for \(L=1\) and \(L=10\) is shown at different times t together with the numerical solution using the FP–RBF method in Fig. 2. The solution is a pulse that travels to the right with time.
Fig. 2

Exact and numerical solutions based on the FP–RBF method with \(L=1\) and \(N=30\) (left) and \(L=10\) and \(N=100\) (right) with \(\varepsilon ^{}=\frac{0.08}{h}\)

In Fig. 2, a shape parameter \(\varepsilon ^{}=\frac{0.08}{h}\) is used. This relation is experimentally determined to ensure stable computations and high solution accuracy. Figure 3 shows how the errors of the two RBF methods vary with \(\varepsilon ^{}\). Using the formula leads to \(\varepsilon ^{}=1.16\) and \(\varepsilon ^{}=0.4\) for the two cases, which is within the best region for each method. It can be noted that the good range of \(\varepsilon ^{}\) is narrower for the resampling method and that both methods rapidly become ill-conditioned when \(\varepsilon ^{}\) is too small.
Fig. 3

The \(L_{\infty }\) error at time \(t=1\) as a function of the shape parameter \(\varepsilon ^{}\) for \(L=1\) and \(N=30\) (left) and \(L=10\) and \(N=100\) (right)

To illustrate the capability of the proposed methods, we start by comparing the approximation accuracy with that of the corresponding pseudo-spectral methods. The pseudo-spectral methods employ the same number of Chebyshev nodes as the number of uniform nodes used by the RBF methods. For a description of the implementations, see [9, 13]. The absolute errors for two different values of L are plotted in Fig. 4. As shown in the figure, the errors of the RBF based methods and the pseudo-spectral methods are similar in magnitude. For the shorter interval, the RS–PS method has smaller errors near the boundaries, which is consistent with the clustering of the Chebyshev nodes. However, for the larger interval, where the solution is small at the boundary, this effect is not visible.
Fig. 4

Absolute error of the FP–RBF method, the RS–RBF method, the FP–PS and the RS–PS method at time \(t=1\) for \(L=1\) and \(N=30\) (left) and for \(L=10\), \(N=100\) (right). For the RBF methods \(\varepsilon ^{}=\frac{0.08}{h}\) was used

The \(L_{\infty }\) errors over time for the approximated solutions are illustrated in Fig. 5. If we consider the global error estimate (A.41), and insert \(q=4\) (for this test case), the exponential time-dependent growth factor becomes \(\exp (C_3((1+t)^{1.12}-1)/1.12)\). We do not know the precise value of \(C_3\), but based on our experiments a value larger than one should be expected, in which case the predicted growth would be at least two orders of magnitude larger than what is observed. However, as discussed in Sect. 4, this is expected to be an overestimate of the true error growth. Both the accuracy and the growth rate of the errors of the four different solutions are similar. For the shorter interval \(L=1\), the RS–RBF method is slightly worse than the other three methods, while for the longer interval \(L=10\), both RBF methods are slightly more accurate than the pseudo-spectral methods for longer times.
Fig. 5

The \(L_{\infty }\) error as a function of time t for the FP–RBF method, the RS–RBF method, the FP–PS method and RS–PS method. Results are shown for \(L=1\) and \(N=30\) (left) and \(L=10\) and \(N=100\) (right). Both RBF methods use \(\varepsilon ^{}=\frac{0.08}{h}\) and a uniform node distribution, while the PS methods employ a Chebyshev node distribution

Figure 6 displays the convergence behavior as a function of N for the two RBF methods compared with the PS methods. For all four methods, the highest attainable accuracy is almost the same. When \(\varepsilon ^{}h\) is constant, as in this experiment, we would expect the error to reach a saturation level, but accuracy is also limited by conditioning, and this may be the effect that we see here. In both cases, the FP–RBF method reaches the highest accuracy faster than the RS–RBF method. The PS methods perform best for the short interval, and worse than the RBF methods for the longer interval. One explanation for this can be that the node density for the Chebyshev nodes compared with the uniform nodes is lower in the interesting region (the middle of the domain) in this case.
Fig. 6

The \(L_{\infty }\) error at time \(t=1\) as a function of the number of node points N for the FP–RBF method, the RS–RBF method, the FP–PS, and the RS–PS method. Results are shown for \(L=1\) and \(\varepsilon ^{}=1\) (left) and \(L=10\) and \(\varepsilon ^{}=0.5\) (right). Both RBF methods use uniform node distributions, while the PS methods employ Chebyshev node distributions

6.2 Two-Dimensional Square Domain

In this section, we demonstrate how the flexibility of the RBF approximations allows us to implement the FP–RBF and RS–RBF methods in a two-dimensional domain with an effort similar to that for the one-dimensional problem. Here, we do not compare with the PS methods, which are less straightforward to implement. We consider the square domain \(\varOmega =[-L,\, L]\times [-L,\, L]\) and the Rosenau equation (1.1) with \(\alpha =1\), \(g(u)=u^3+u^2\), initial condition \(f_0(x,y)=\text {sech} (x+y)\), and boundary conditions
$$\begin{aligned} f_1(x,y,t)&=\text {sech}(x+y-t), \end{aligned}$$
(6.1)
$$\begin{aligned} f_2(x,y,t)&=-\text {sech}(x+y-t)\text {tanh}(x+y-t), \end{aligned}$$
(6.2)
where the derivative in the second condition is taken as either \(u_x\) or \(u_y\) depending on which part of the boundary is involved. For the two-dimensional test cases, we do not have any analytical solutions. The approximate solution at two different times is displayed in Fig. 7.
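The Neumann data (6.2) is consistent with the Dirichlet data (6.1), since \(\partial_x\,\mathrm{sech}(x+y-t)=-\mathrm{sech}(x+y-t)\tanh(x+y-t)\) (and identically for \(\partial_y\)). As a hedged sanity check, not code from the paper, this can be verified numerically with a centered finite difference:

```python
import math

# f1 and f2 follow Eqs. (6.1)-(6.2); the test point is arbitrary.
def f1(x, y, t):
    return 1.0 / math.cosh(x + y - t)

def f2(x, y, t):
    s = x + y - t
    return -math.tanh(s) / math.cosh(s)

x, y, t, h = 0.3, -0.7, 1.0, 1e-6
# Centered finite difference of f1 in the x-direction.
fd = (f1(x + h, y, t) - f1(x - h, y, t)) / (2.0 * h)
print(abs(fd - f2(x, y, t)))  # small: f2 matches the x-derivative of f1
```

The same check with a difference in the y-direction gives the identical result, because f1 depends on x and y only through the sum x + y.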
Fig. 7

Resampling RBF approximate solution in the square domain \(\varOmega =[-2,\,2]^2\) at time \(t=1\) (left) and \(t=3\) (right) with \(n=27\) points and shape parameter \(\varepsilon ^{}=0.9\)

We start from a uniform discretization of the domain \(\varOmega \) with \(n^2\) points. We denote the number of interior points by \(N_d\) and the number of boundary points by \(N_b\). For FP–RBF we need to add \(N_b\) fictitious points outside the domain to enforce the Neumann boundary conditions. Note that if we simply choose an extension of the uniform grid, the resulting number of fictitious points is too large. The total number of points becomes \(N=N_d+2N_b=(n-2)^2+2(4n-4)=(n+2)^2-8\). For the RS–RBF method we generate \(N-2N_b\) auxiliary points inside the domain. Note that here the number of boundary points is modified (and these are therefore not on the uniform grid) to make the total and auxiliary node numbers compatible. If we choose \(N_b=4(n-2)-4\), then the number of auxiliary points is \((n-4)^2\). The total number of node points is \(N=N_d+N_b=(n-2)^2+4n-12=n^2-8\). Sample node distributions for the two methods are displayed in Fig. 8.
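The node bookkeeping above can be re-derived in a few lines. This is a hedged sketch, not the paper's code; the names n, Nd, and Nb simply mirror the text.

```python
def fp_counts(n):
    """Point counts for the FP-RBF method on an n-by-n uniform grid."""
    Nd = (n - 2) ** 2      # interior points
    Nb = 4 * n - 4         # boundary points of the full grid
    N = Nd + 2 * Nb        # plus one fictitious point per boundary point
    return Nd, Nb, N

def rs_counts(n):
    """Point counts for the RS-RBF method on the same grid."""
    Nd = (n - 2) ** 2      # interior points
    Nb = 4 * (n - 2) - 4   # modified number of boundary points
    N = Nd + Nb            # total node points carrying unknowns
    aux = N - 2 * Nb       # auxiliary (resampled) PDE collocation points
    return Nd, Nb, N, aux

n = 10
assert fp_counts(n)[2] == (n + 2) ** 2 - 8   # FP total, as in the text
assert rs_counts(n)[2] == n ** 2 - 8         # RS total, as in the text
assert rs_counts(n)[3] == (n - 4) ** 2       # RS auxiliary points
```

The assertions confirm that the closed-form expressions in the text are consistent with the counting rules for any grid size n.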
Fig. 8

A node distribution for the FP–RBF method, where the fictitious points are uniformly distributed outside the domain (left) and a node distribution for the RS–RBF method with auxiliary points displayed in black as \(*\) (right) for a square computational domain

According to the error estimate (A.41) for the FP–RBF method, we expect exponential convergence in \(1/\sqrt{h}\) for fixed \(\varepsilon ^{}\). In practice, we often observe exponential convergence in \(1/h\). In Fig. 9, to make a fair comparison, we plot the error as a function of \(\sqrt{N}\propto 1/h\). For this range of N-values, the conditioning is low enough not to influence the error, and exponential convergence can be observed for both RBF methods. We see that the estimated slopes in the right subfigure are precisely double those in the left subfigure. If we take into account that h is also twice as large for the case \(L=2\), we can conclude that the rate of convergence in terms of \(1/h\) is the same in both cases.
Fig. 9

The error at time \(t=1\) against the square root of the number of points \(\sqrt{N}\) for \(L=1\) and \(\varepsilon ^{}=1.6\) (left) and \(L=2\) and \(\varepsilon ^{}=0.9\) (right). In both cases, errors are computed against a reference solution computed with the FP–RBF method for \(n=30~(N=1016)\) (left) and \(n=32~(N=1148)\) (right); the error is evaluated at \(25\times 25\) interior points

6.3 Two-Dimensional Irregular Domain with Smooth Boundary

We now take a step further in demonstrating the flexibility of the RBF based methods by considering an irregular two-dimensional domain. As a test problem, we consider the domain with boundary defined by the parametric equation
$$\begin{aligned} r(\theta )=1+0.06(\sin (6\theta )+\sin (3\theta )),\quad \theta \in [0,2\pi ). \end{aligned}$$
(6.3)
We also need the derivative of the boundary equation in order to compute the outward normal direction \(n=(n_x,n_y)\), which is needed for the boundary conditions. We have
$$\begin{aligned} (n_x,n_y) = \frac{r^\prime (\theta )(\sin (\theta ),-\cos (\theta )) + r(\theta )(\cos (\theta ),\sin (\theta ))}{\sqrt{r^\prime (\theta )^2+r(\theta )^2}}. \end{aligned}$$
(6.4)
We use a test problem similar to that for the square domain, with Dirichlet boundary data
$$\begin{aligned} f_1(x,y,t)=\mathrm {sech}(x+y-t). \end{aligned}$$
(6.5)
For the normal derivative condition, we impose
$$\begin{aligned} f_2(x,y,t)=\nabla u\cdot n=-\text {sech}(x+y-t)\text {tanh}(x+y-t)(n_x+n_y). \end{aligned}$$
(6.6)
The approximate solution at three different times is shown in Fig. 10.
Fig. 10

Resampling RBF approximate solution in the irregular domain with \(N_d=386\) interior points, \(N_b=70\) boundary points using \(\varepsilon ^{}=1.3\) for \(t=0.5\) (left), \(t=1\) (middle), and \(t=2\) (right)

Sample node distributions for the FP–RBF and RS–RBF methods are illustrated in Fig. 11. Just as for the square domain, N denotes the total number of points: the FP–RBF method uses \(N_b\) fictitious points outside the domain, so that \(N=N_d+2N_b\), while the RS–RBF method uses \(N=N_d+N_b\) node points and \(N-2N_b=N_d-N_b\) auxiliary points inside the domain.
Fig. 11

Sample node distributions with \(N_d=386\) and \(N_b=70\) for the irregular domain for the FP–RBF method (left) and the RS–RBF method (right) with auxiliary points in black marked with \(*\)

The max error as a function of \(\sqrt{N}\) for a fixed shape parameter value is illustrated in Fig. 12. The reference solution is computed using the FP–RBF method with \(N_d=547\) and \(N_b=84\) nodes. The max error is estimated from evaluation at 540 radially uniformly distributed points in the domain. The RS–RBF method is less accurate in this case, even though it converges at a similar rate. In other experiments in this section, we have also observed that it is more sensitive to problem and method parameters.
Fig. 12

Error at time \(t=1\) as a function of the square root of the number of points \(\sqrt{N}\) where \(\varepsilon ^{}=1.3\) for RS–RBF and \(\varepsilon ^{}=1.5\) for FP–RBF. The reference solution is produced using the fictitious point method with \(N=715\approx 26.74^2\)

Overall, the error trends are similar to those for the square domain, showing that the RBF methods provide a well-functioning generalization of both the fictitious point method and the resampling method to general domains.

7 Conclusion

The Rosenau equation, which is used as an application throughout this paper, is an example of a non-linear PDE with multiple boundary conditions as well as mixed space-time derivatives. Multiple boundary conditions provide an extra challenge when solving PDE problems. The standard form of a typical collocation method assumes that one condition is imposed at each node point/grid point. Hence, the additional conditions at the boundary nodes lead to a mismatch between the number of conditions and the number of unknowns.

Two approaches to manage multiple boundary conditions that have been introduced for spectral methods are fictitious point methods and resampling methods. In this paper we have shown how to implement these approaches in the context of RBF collocation methods. From numerical experiments for a one-dimensional test problem, we could see that the behavior of the method with respect to accuracy in space and time is very similar to that of the corresponding pseudo-spectral method.

For two-dimensional problems, already in a regular geometry such as the square, the application of spectral methods becomes more complicated. Approximations are typically based on tensor product grids, but if we use the one-dimensional extension techniques for each grid line, the numbers of extra conditions and extra points again do not naturally match. The problem can, for example, be resolved by choosing one of the directions for the corner points, but then the approximations in the other direction need to be of lower order.

We show that with the two RBF methods, due to the freedom of node placement, we can distribute the fictitious points or the resampled nodes uniformly and symmetrically with respect to the domain. Furthermore, we show that the concept can also be transferred to irregularly shaped domains.

We have also analyzed the theoretical properties of the fictitious point RBF approximation for the one-dimensional Rosenau equation. We could show that the spectral convergence of the spatial approximation carries over to the PDE solution, while the growth of the error in time in our estimate strongly depends on the bounds on the non-linear term.

To conclude, both the implemented approaches are promising for problems with multiple boundary conditions, especially for geometries where spectral methods cannot easily be applied. Global RBF approximations such as the ones used here are competitive for problems in one or two space dimensions, but the computational cost can become prohibitive for higher-dimensional problems due to the need to solve dense linear systems. Therefore, an interesting future direction is to see how resampling and fictitious point methods can be combined with localized (stencil or partition based) RBF methods.

References

  1. Atouani, N., Omrani, K.: A new conservative high-order accurate difference scheme for the Rosenau equation. Appl. Anal. 94(12), 2435–2455 (2015). https://doi.org/10.1080/00036811.2014.987134
  2. Bayona, V., Flyer, N., Fornberg, B., Barnett, G.A.: On the role of polynomials in RBF-FD approximations: II. Numerical solution of elliptic PDEs. J. Comput. Phys. 332, 257–273 (2017). https://doi.org/10.1016/j.jcp.2016.12.008
  3. Brown, P.N., Hindmarsh, A.C., Petzold, L.R.: Consistent initial condition calculation for differential-algebraic systems. SIAM J. Sci. Comput. 19(5), 1495–1512 (1998). https://doi.org/10.1137/S1064827595289996
  4. Chung, S.K.: Finite difference approximate solutions for the Rosenau equation. Appl. Anal. 69(1–2), 149–156 (1998). https://doi.org/10.1080/00036819808840652
  5. Chung, S.K., Ha, S.N.: Finite element Galerkin solutions for the Rosenau equation. Appl. Anal. 54(1–2), 39–56 (1994). https://doi.org/10.1080/00036819408840267
  6. Chung, S.K., Pani, A.K.: Numerical methods for the Rosenau equation. Appl. Anal. 77(3–4), 351–369 (2001). https://doi.org/10.1080/00036810108840914
  7. Cipra, B.: What's Happening in the Mathematical Sciences, vol. 2. American Mathematical Society, East Providence, RI (1994)
  8. Dragomir, S.S.: Some Gronwall Type Inequalities and Applications. Nova Science Publishers, Hauppauge, NY (2003)
  9. Driscoll, T.A., Hale, N.: Rectangular spectral collocation. IMA J. Numer. Anal. (2015). https://doi.org/10.1093/imanum/dru062
  10. Fasshauer, G.E.: Meshfree Approximation Methods with MATLAB, Interdisciplinary Mathematical Sciences, vol. 6. World Scientific Publishing, Hackensack, NJ (2007)
  11. Flyer, N., Fornberg, B., Bayona, V., Barnett, G.A.: On the role of polynomials in RBF-FD approximations: I. Interpolation and accuracy. J. Comput. Phys. 321, 21–38 (2016). https://doi.org/10.1016/j.jcp.2016.05.026
  12. Fornberg, B.: A Practical Guide to Pseudospectral Methods, Cambridge Monographs on Applied and Computational Mathematics, vol. 1. Cambridge University Press, Cambridge (1996). https://doi.org/10.1017/CBO9780511626357
  13. Fornberg, B.: A pseudospectral fictitious point method for high order initial-boundary value problems. SIAM J. Sci. Comput. 28(5), 1716–1729 (2006). https://doi.org/10.1137/040611252
  14. Fornberg, B., Flyer, N.: Solving PDEs with radial basis functions. Acta Numer. 24, 215–258 (2015). https://doi.org/10.1017/S0962492914000130
  15. Fornberg, B., Whitham, G.B.: A numerical and theoretical study of certain nonlinear wave phenomena. Philos. Trans. R. Soc. Lond. Ser. A 289(1361), 373–404 (1978). https://doi.org/10.1098/rsta.1978.0064
  16. Hesthaven, J.S.: Spectral penalty methods. In: Proceedings of the Fourth International Conference on Spectral and High Order Methods (ICOSAHOM 1998) (Herzliya), vol. 33, pp. 23–41 (2000). https://doi.org/10.1016/S0168-9274(99)00068-9
  17. Hesthaven, J.S., Gottlieb, D.: A stable penalty method for the compressible Navier–Stokes equations. I. Open boundary conditions. SIAM J. Sci. Comput. 17(3), 579–612 (1996). https://doi.org/10.1137/S1064827594268488
  18. Hon, Y.C., Schaback, R.: On unsymmetric collocation by radial basis functions. Appl. Math. Comput. 119(2–3), 177–186 (2001). https://doi.org/10.1016/S0096-3003(99)00255-6
  19. Hu, J., Zheng, K.: Two conservative difference schemes for the generalized Rosenau equation. Bound. Value Probl. 2010, 543503 (2010). https://doi.org/10.1155/2010/543503
  20. Kansa, E.J.: Motivation for using radial basis functions to solve PDEs. Tech. rep. (1999). Available at http://www.rbf-pde.org/kansaweb.pdf
  21. Kim, Y.D., Lee, H.Y.: The convergence of finite element Galerkin solution for the Roseneau equation. Korean J. Comput. Appl. Math. 5(1), 171–180 (1998)
  22. Larsson, E., Fornberg, B.: A numerical study of some radial basis function based solution methods for elliptic PDEs. Comput. Math. Appl. 46(5–6), 891–902 (2003). https://doi.org/10.1016/S0898-1221(03)90151-9
  23. Larsson, E., Shcherbakov, V., Heryudono, A.: A least squares radial basis function partition of unity method for solving PDEs. SIAM J. Sci. Comput. (2017) (to appear)
  24. Lee, H.Y., Ahn, M.J.: The convergence of the fully discrete solution for the Roseneau equation. Comput. Math. Appl. 32(3), 15–22 (1996). https://doi.org/10.1016/0898-1221(96)00110-1
  25. Micchelli, C.A.: Interpolation of scattered data: distance matrices and conditionally positive definite functions. Constr. Approx. 2(1), 11–22 (1986). https://doi.org/10.1007/BF01893414
  26. Omrani, K., Abidi, F., Achouri, T., Khiari, N.: A new conservative finite difference scheme for the Rosenau equation. Appl. Math. Comput. 201(1–2), 35–43 (2008). https://doi.org/10.1016/j.amc.2007.11.039
  27. Park, M.A.: Model equations in fluid dynamics. Ph.D. thesis, Tulane University (1990)
  28. Park, M.A.: Pointwise decay estimates of solutions of the generalized Rosenau equation. J. Korean Math. Soc. 29(2), 261–280 (1992)
  29. Petzold, L.R.: Numerical solution of differential-algebraic equations. In: Theory and Numerics of Ordinary and Partial Differential Equations (Leicester, 1994), Adv. Numer. Anal., IV, pp. 123–142. Oxford University Press, New York (1995)
  30. Rieger, C., Zwicknagl, B.: Sampling inequalities for infinitely smooth functions, with applications to interpolation and machine learning. Adv. Comput. Math. 32(1), 103–129 (2010). https://doi.org/10.1007/s10444-008-9089-0
  31. Rosenau, P.: Dynamics of dense discrete systems: high order effects. Prog. Theor. Phys. 79(5), 1028–1042 (1988)
  32. Safdari-Vaighani, A., Heryudono, A., Larsson, E.: A radial basis function partition of unity collocation method for convection–diffusion equations arising in financial applications. J. Sci. Comput. 64(2), 341–367 (2015). https://doi.org/10.1007/s10915-014-9935-9
  33. Schoenberg, I.J.: Metric spaces and completely monotone functions. Ann. of Math. (2) 39(4), 811–841 (1938). https://doi.org/10.2307/1968466
  34. Singer, M.F.: Solving homogeneous linear differential equations in terms of second order linear differential equations. Am. J. Math. 107(3), 663–696 (1985). https://doi.org/10.2307/2374373
  35. Trefethen, L.N.: Spectral Methods in MATLAB, Software, Environments, and Tools, vol. 10. SIAM, Philadelphia, PA (2000). https://doi.org/10.1137/1.9780898719598

Copyright information

© The Author(s) 2017

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Ali Safdari-Vaighani (1)
  • Elisabeth Larsson (2, corresponding author)
  • Alfa Heryudono (3)

  1. Department of Mathematical and Computer Sciences, Allameh Tabataba'i University, Tehran, Iran
  2. Department of Information Technology, Uppsala University, Uppsala, Sweden
  3. Department of Mathematics, University of Massachusetts Dartmouth, Dartmouth, USA
