1 Introduction

General linear methods have been extensively studied for solving differential equations. Among the large family of general linear methods, the diagonally implicit multistage integration methods (DIMSIMs) in [1] are special cases that exhibit considerable potential for efficient implementation, providing a global error of the same order as the local truncation error. In [2], it was demonstrated that finite difference methods for PDEs can be constructed so that their convergence rates, i.e. the order of their global errors, are higher than the order of their truncation errors. Following this idea, Ditkowski and Gottlieb devised the error inhibiting strategy in [3]: by preventing the lowest order term of the truncation error from accumulating over time, they showed that the global error of the scheme is one order higher than the local truncation error. The key idea of this method is to construct a coefficient matrix whose null space contains the leading term of the local truncation error.

In this work, we further improve the original error inhibiting method by using radial basis function (RBF) approximations. The main idea of the proposed method is to replace the polynomial basis with an RBF basis for the reconstruction of the solution. The RBF basis introduces a free parameter, known as the shape parameter. By exploiting this parameter, the error can be further reduced, resulting in a higher order of convergence than the original method. The main advantage is that the proposed method requires no conditions beyond those already given, so it is efficient to implement.

The next section reviews the explicit error inhibiting block one-step method. In Sect. 3, we explain the RBF method. In Sect. 4, we show how the new method is derived, followed by Sect. 5, where numerical results verify that the convergence rate of the proposed method is increased by one order. A brief conclusion and an outline of our future research are presented in Sect. 6.

2 Error Inhibiting Block One-Step Method

Consider the initial value problem for the first-order ODE below

$$\displaystyle \begin{aligned} \begin{aligned} & u^{\prime}(t) = f(t,u(t)),~t \geqslant a \\ & u(a) = u_a \end{aligned} \end{aligned} $$
(1)

where we assume f(t, u) is uniformly Lipschitz continuous in u and continuous in t. We choose a value h for the step size and set \(t_n = a + nh\), which defines a discrete sequence of points in the time domain. Denote the numerical approximation of the solution \(u(t_n)\) by \(v_n\).

Define the solution vector \(U_n\) by

$$\displaystyle \begin{aligned} U_n = \left[ u_{n+\frac{s-1}{s}}, \cdots, u_{n+\frac{1}{s}}, u_n \right]^T , \end{aligned}$$

where \(u_{n+\frac {j}{s}} = u(t_n+\frac {jh}{s})\) is the exact solution at \(t = t_n+\frac {jh}{s}\) for j = 0, ⋯ , s − 1. The corresponding approximation vector \(V_n\) is defined as

$$\displaystyle \begin{aligned} V_n = \left[ v_{n+\frac{s-1}{s}}, \cdots, v_{n+\frac{1}{s}}, v_n \right]^T . \end{aligned}$$

In [3], the scheme is formulated as

$$\displaystyle \begin{aligned} V_{n+1} = Q V_n \end{aligned} $$
(2)

where the operator Q is given by

$$\displaystyle \begin{aligned} Q = A + hBf \end{aligned}$$

that is, \(Q V_n = A V_n + h B F(V_n)\), where \(F(V_n)\) denotes the vector of right-hand side evaluations at the components of \(V_n\), and \(A, B \in \mathbb {R}^{s \times s}\). There are four sufficient conditions imposed on the matrices A and B in order for the scheme to be error inhibiting:

  1. rank(A) = 1.

  2. The only non-zero eigenvalue of A is 1 and its corresponding eigenvector is

     $$\displaystyle \begin{aligned}{}[1, \cdots, 1]^T. \end{aligned}$$

  3. A can be diagonalized.

  4. The matrices A and B are constructed such that when the local truncation error is multiplied by the discrete solution operator, we have

     $$\displaystyle \begin{aligned} ||Q \tau_{\nu}|| \leqslant O(h) \cdot ||\tau_{\nu}||. \end{aligned}$$

     This is accomplished by requiring that the leading order term of the local truncation error lies in the eigenspace of A associated with the zero eigenvalue.

We derive the matrices A and B with symbolic computation. As an example of the derivation of the error inhibiting method, we consider the construction of the scheme with s = 2. The solution vector is then

$$\displaystyle \begin{aligned} U_n = [ u_{n+1/2}, u_n ]^T , \end{aligned}$$

and the corresponding approximation vector is given by

$$\displaystyle \begin{aligned} V_n = [ v_{n+1/2}, v_n ]^T . \end{aligned}$$

In order to satisfy the conditions listed above, we first select

$$\displaystyle \begin{aligned} A = \begin{bmatrix} 1-\upsilon & \upsilon \\ 1-\upsilon & \upsilon \end{bmatrix} , \end{aligned} $$
(3)

which can be diagonalized as

$$\displaystyle \begin{aligned} A = \begin{bmatrix} 1-\upsilon & \upsilon \\ 1-\upsilon & \upsilon \end{bmatrix} = \begin{bmatrix} \upsilon-1 & \upsilon \\ \upsilon-1 & \upsilon-1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} -1 & \frac{\upsilon}{\upsilon-1} \\ 1 & -1 \end{bmatrix}. \end{aligned} $$
(4)
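
As a quick sanity check of conditions 1–3 for the choices (3) and (4), the following sympy sketch verifies the rank, the eigenvalues and the factorization with υ kept symbolic; it is only a verification aid, not part of the derivation.

```python
import sympy as sp

v = sp.symbols('upsilon')
A = sp.Matrix([[1 - v, v], [1 - v, v]])

print(A.rank())                          # condition 1: rank(A) = 1
print(A.eigenvals())                     # condition 2: eigenvalues are 1 and 0
print(sp.simplify(A * sp.ones(2, 1)))    # A [1, 1]^T = [1, 1]^T

# condition 3: the factorization (4) reproduces A
P = sp.Matrix([[v - 1, v], [v - 1, v - 1]])
D = sp.diag(1, 0)
print(sp.simplify(P * D * P.inv() - A))  # zero matrix
```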

Then conditions 1, 2 and 3 are satisfied. Further suppose that

$$\displaystyle \begin{aligned} B = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} . \end{aligned} $$
(5)

Then

$$\displaystyle \begin{aligned} V_{n+1} = \begin{bmatrix} 1-\upsilon & \upsilon \\ 1-\upsilon & \upsilon \end{bmatrix} V_n + h \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} \begin{bmatrix} f_{n+1/2}\\ f_n \end{bmatrix} \end{aligned} $$
(6)

where \(f_{n+1/2} = f(t_{n+1/2}, v_{n+1/2})\) and \(f_n = f(t_n, v_n)\). The components of \(V_{n+1}\) are

$$\displaystyle \begin{aligned} v_{n+3/2} & = (1-\upsilon) v_{n+1/2} + \upsilon v_n + h(b_{11} f_{n+1/2}+ b_{12} f_n), \\ v_{n+1} & = (1-\upsilon) v_{n+1/2} + \upsilon v_n + h(b_{21} f_{n+1/2}+ b_{22} f_n). \end{aligned} $$

We write each difference equation in the form of a one-step error normalized by the step size and then insert the exact solution of the ODE into the difference equation. Expanding \(u_{n+3/2}\), \(u_{n+1}\) and \(u_{n+1/2}\) around \(t = t_n\) in Taylor series gives the local truncation error

$$\displaystyle \begin{aligned} \boldsymbol{\tau}_{n+1} = (\tau_{n+3/2}, \tau_{n+1})^T , \end{aligned}$$

where

$$\displaystyle \begin{aligned} \tau_{n+3/2} = \frac{1}{2} (2-2b_{11}-2b_{12}+\upsilon) u^{\prime}_n {} & + \frac{1}{8} (8-4b_{11}+\upsilon) u^{\prime\prime}_n h \\ & + \frac{1}{48} (26-6b_{11}+\upsilon) u^{(3)}_n h^2 + O(h^3), {} \end{aligned} $$
(7)
$$\displaystyle \begin{aligned} \tau_{n+1} = \frac{1}{2} (1-2b_{21}-2b_{22}+\upsilon) u^{\prime}_n {} & + \frac{1}{8} (3-4b_{21}+\upsilon) u^{\prime\prime}_n h \\ & + \frac{1}{48}(7-6 b_{21}+\upsilon) u^{(3)}_n h^2 + O(h^3). {} \end{aligned} $$
(8)
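
To illustrate the symbolic computation mentioned above, here is a minimal sympy sketch that reproduces the expansion (7) for the first block component by substituting truncated Taylor polynomials into the difference equation; the expansion (8) for the second component is obtained analogously, and all symbol names below are ours.

```python
import sympy as sp

h, v, b11, b12 = sp.symbols('h upsilon b_11 b_12')
u0, u1, u2, u3 = sp.symbols('u0 u1 u2 u3')   # u, u', u'', u''' at t = t_n

def taylor(s):      # truncated Taylor polynomial of u(t_n + s)
    return u0 + s*u1 + s**2/2*u2 + s**3/6*u3

def taylor_d(s):    # truncated Taylor polynomial of u'(t_n + s)
    return u1 + s*u2 + s**2/2*u3

# one-step error of the first block component, normalized by h, cf. (7)
tau = (taylor(3*h/2) - ((1 - v)*taylor(h/2) + v*u0
       + h*(b11*taylor_d(h/2) + b12*u1))) / h
print(sp.expand(tau).collect(h))
```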

Setting to zero the coefficients of the constant term and of the h term in (7) and (8), and equating the ratio of the \(h^2\) coefficients in (7) and (8) to \(\frac {\upsilon }{\upsilon -1}\), condition 4 is satisfied. Finally we have the desired scheme as in [3]

$$\displaystyle \begin{aligned} V_{n+1} = \frac{1}{6} \begin{bmatrix} -1 & 7 \\ -1 & 7 \end{bmatrix} V_n + \frac{h}{24} \begin{bmatrix} 55 & -17 \\ 25 & 1 \end{bmatrix} \begin{bmatrix} f_{n+1/2} \\ f_n \end{bmatrix}, \end{aligned} $$
(9)

and correspondingly the local truncation error is 2nd order, as expected:

$$\displaystyle \begin{aligned} \boldsymbol{\tau}_n = \frac{23}{576} \begin{bmatrix} 7 \\ 1 \end{bmatrix} u^{(3)}_n h^2 + O(h^3). \end{aligned} $$
(10)
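
To make the marching of (9) concrete, the following minimal Python sketch applies the scheme to the test problem u′ = −u² of Sect. 5. The starting value at t = h∕2 is simply taken from the exact solution, which is an assumption of the sketch rather than part of the method, and the names eis_solve, f and exact are ours.

```python
import numpy as np

def f(t, u):
    return -u**2                     # right-hand side of the test problem (27)

def exact(t):
    return 1.0 / (t + 1.0)           # exact solution of (27)

def eis_solve(N, T=1.0):
    # March the block scheme (9) over [0, T] with step h = T/N.
    h = T / N
    v_n, v_half = exact(0.0), exact(0.5 * h)     # starting values at t = 0 and t = h/2
    t = 0.0
    for _ in range(N):
        f_half, f_n = f(t + 0.5 * h, v_half), f(t, v_n)
        a_part = (-v_half + 7.0 * v_n) / 6.0
        v_new_half = a_part + h * (55.0 * f_half - 17.0 * f_n) / 24.0
        v_new      = a_part + h * (25.0 * f_half + 1.0 * f_n) / 24.0
        v_half, v_n = v_new_half, v_new
        t += h
    return abs(v_n - exact(T))

errors = [eis_solve(N) for N in (40, 80, 160)]
print(errors, [np.log2(errors[i] / errors[i + 1]) for i in range(2)])
# by the error inhibiting property [3], the observed global order should be
# one higher than the 2nd-order truncation error (10)
```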

3 RBF Interpolation

Now we briefly explain RBF interpolation in one dimension. Suppose that for a domain \(\Omega \subset \mathbb {R}\), a data set \(\{ (x_i, u_i) \}^N_{i=0}\) is given, where \(u_i\) is the value of the unknown function u(x) at \(x = x_i \in \Omega\). We use the RBFs \(\phi : \Omega \to \mathbb {R}\) defined by \(\phi (x) = \phi (|x - x_i|, \epsilon_i)\), where \(|x - x_i|\) is the distance between x and \(x_i\) and \(\epsilon_i\) is the shape parameter. The reconstruction of the function u(x) is then made by a linear combination of RBFs

$$\displaystyle \begin{aligned} I^R_N~u(x) = \sum_{i=0}^N \lambda_i \phi (|x - x_i|, \epsilon_i), \end{aligned} $$
(11)

where \(\lambda_i\) are the expansion coefficients to be determined. Using the interpolation conditions \(I^R_N u(x_i) = u_i,~i = 0, \dotsb , N\), we can find the expansion coefficients \(\lambda_i\) by solving the linear system

$$\displaystyle \begin{aligned} \begin{bmatrix} \phi (|x_0 - x_0|, \epsilon_0) & \phi (|x_0 - x_1|, \epsilon_1) & \cdots & \phi (|x_0 - x_N|, \epsilon_N) \\ \phi (|x_1 - x_0|, \epsilon_0) & \phi (|x_1 - x_1|, \epsilon_1) & \cdots & \phi (|x_1 - x_N|, \epsilon_N) \\ \vdots & \vdots & & \vdots \\ \phi (|x_N - x_0|, \epsilon_0) & \phi (|x_N - x_1|, \epsilon_1) & \cdots & \phi (|x_N - x_N|, \epsilon_N) \end{bmatrix} \cdot \begin{bmatrix} \lambda_0 \\ \lambda_1 \\ \vdots \\ \lambda_N \end{bmatrix} = \begin{bmatrix} u_0 \\ u_1 \\ \vdots\\ u_N \end{bmatrix} . \end{aligned} $$
(12)

We choose the multiquadric RBF with all the shape parameters equal. Then the interpolation matrix, A, becomes a symmetric matrix with all diagonal entries 1,

$$\displaystyle \begin{aligned} A = \begin{bmatrix} 1 & \sqrt{1 + \epsilon^2 (x_0 - x_1)^2} & \cdots & \sqrt{1 + \epsilon^2 (x_0 - x_N)^2} \\ \sqrt{1 + \epsilon^2 (x_1 - x_0)^2} & 1 & \cdots & \sqrt{1 + \epsilon^2 (x_1 - x_N)^2} \\ \vdots & \vdots & & \vdots \\ \sqrt{1 + \epsilon^2 (x_N - x_0)^2} & \sqrt{1 + \epsilon^2 (x_N - x_1)^2} & \cdots & 1 \end{bmatrix} . \end{aligned} $$
(13)
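
As a small illustration of how the system (12) with the multiquadric matrix (13) is assembled and solved, consider the following minimal numpy sketch; the helper name mq_interpolant, the node set and the test function are ours and purely illustrative.

```python
import numpy as np

def mq_interpolant(x_nodes, u_nodes, eps):
    # Build the multiquadric matrix (13) with a single shape parameter eps,
    # solve the system (12) for the coefficients, and return the interpolant (11).
    r = np.abs(x_nodes[:, None] - x_nodes[None, :])
    A = np.sqrt(1.0 + (eps * r) ** 2)
    lam = np.linalg.solve(A, u_nodes)
    return lambda x: np.sqrt(1.0 + (eps * np.abs(x - x_nodes)) ** 2) @ lam

# illustrative usage: interpolate sin(x) on five equispaced nodes
x = np.linspace(0.0, 1.0, 5)
I = mq_interpolant(x, np.sin(x), eps=1.0)
print(I(0.37), np.sin(0.37))
```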

Consider the case of three equally spaced nodes \(x_0, x_1, x_2\) with \(x_0 < x_1 < x_2\). Let h be the grid spacing. Then the linear system becomes

$$\displaystyle \begin{aligned} \begin{bmatrix} 1 & \sqrt{1 + \epsilon^2 h^2} & \sqrt{1 + 4 \epsilon^2 h^2} \\ \sqrt{1 + \epsilon^2 h^2} & 1 & \sqrt{1 + \epsilon^2 h^2} \\ \sqrt{1 + 4 \epsilon^2 h^2} & \sqrt{1 + \epsilon^2 h^2} & 1 \end{bmatrix} \cdot \begin{bmatrix} \lambda_0 \\ \lambda_1 \\ \lambda_2 \end{bmatrix} = \begin{bmatrix} u_0 \\ u_1 \\ u_2 \end{bmatrix} . \end{aligned} $$
(14)

By the closed-form expression for the RBF interpolant in [4],

$$\displaystyle \begin{aligned} I^R_2 u(x) = \sum_{i=0}^2 \frac{u_i}{\det (A)} \det (A_i(x)). \end{aligned} $$
(15)

where \(A_i(x)\), a 3 × 3 matrix, is obtained by replacing the ith row of A with the row vector

$$\displaystyle \begin{aligned} \left[ \sqrt{1 + \epsilon^2 (x-x_0)^2} \quad \sqrt{1 + \epsilon^2 (x-x_1)^2} \quad \sqrt{1 + \epsilon^2 (x-x_2)^2} \right]. \end{aligned}$$

Differentiating the interpolant, we obtain the first-order derivative

$$\displaystyle \begin{aligned} \frac{d}{dx} I^R_2 u(x) = \sum_{i=0}^2 \frac{u_i}{\det (A)} \cdot \frac{d}{dx} \det (A_i(x)). \end{aligned} $$
(16)

We then estimate the derivative of u at \(x = x_1\), just as polynomial interpolation yields the central difference formula:

$$\displaystyle \begin{aligned} \frac{d}{dx} I^R_2 u(x_1) = \frac{\sqrt{1 + 4 \epsilon^2 h^2} + 1}{ 4 h \sqrt{1 + \epsilon^2 h^2}} (u_2 - u_0). \end{aligned} $$
(17)

By employing the Taylor expansion of the quotient on the right-hand side of (17), we have

$$\displaystyle \begin{aligned} \frac{d}{dx} I^R_2 u(x_1) = \left[ \frac{1}{2h} + \epsilon^2 \frac{h}{4} + O(h^3) \right] (u_2 - u_0). \end{aligned} $$
(18)
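
The short sketch below simply evaluates the RBF weight from (17) for a few values of 𝜖 and compares it with the truncated expansion (18) and with the standard central-difference weight 1∕(2h); the chosen values of h and 𝜖 are arbitrary.

```python
import numpy as np

h = 0.05
for eps in (0.0, 1.0, 2.0):
    w_rbf = (np.sqrt(1.0 + 4.0 * eps**2 * h**2) + 1.0) / (4.0 * h * np.sqrt(1.0 + eps**2 * h**2))
    w_series = 1.0 / (2.0 * h) + eps**2 * h / 4.0     # leading terms of (18)
    print(eps, w_rbf, w_series, 1.0 / (2.0 * h))
# eps = 0 recovers the central-difference weight 1/(2h); increasing eps
# perturbs it by roughly eps^2 * h / 4, which is the handle exploited below.
```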

The main feature of the RBF method is that it contains a free parameter, 𝜖, which we can exploit to further inhibit the errors. In the following section, we will show that coupling the parameter 𝜖 with \(h^p\) terms, where \(p \geqslant 2\), increases the order of the local truncation error and, by adopting the error inhibiting strategy, further raises the order of the global error.

4 Construction of the Improved Error Inhibiting Scheme

Following the main feature of the RBF method explained in the preceding section, we establish a similar explicit block one-step scheme that provides a higher order of convergence by adding one more block of the shape parameters \(\epsilon_1\) and \(\epsilon_2\) coupled with an \(h^p\) term. With p = 3, we have

$$\displaystyle \begin{aligned} V_{n+1} = \begin{bmatrix} 1-\upsilon & \upsilon \\ 1-\upsilon & \upsilon \end{bmatrix} V_n + h \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} \begin{bmatrix} f_{n+1/2} \\ f_n \end{bmatrix} + h^3 \begin{bmatrix} 0 & \epsilon_1 \\ 0 & \epsilon_2 \end{bmatrix} \begin{bmatrix} f_{n+1/2} \\ f_n \end{bmatrix}. \end{aligned} $$
(19)

We measure the one-step error normalized by the step size as in Sect. 2. Expanding \(u_{n+3/2}\), \(u_{n+1}\) and \(u_{n+1/2}\) around \(t = t_n\) in Taylor series again yields the local truncation error

$$\displaystyle \begin{aligned} \boldsymbol{\tau}_{n+1} = [ \tau_{n+3/2}, \tau_{n+1} ]^T , \end{aligned}$$

where

$$\displaystyle \begin{aligned} \tau_{n+3/2} = {} & \frac{1}{2} (2-2b_{11}-2b_{12}+\upsilon) u^{\prime}_n + \frac{1}{8} (8-4b_{11}+\upsilon) u^{\prime\prime}_n h~+ \\ & \left( -\epsilon_1 u^{\prime}_n + \frac{1}{48} (26-6b_{11}+\upsilon) u^{(3)}_n \right) h^2 + \frac{1}{384} (80-8b_{11}+\upsilon) u^{(4)}_n h^3 + O(h^4), {} \end{aligned} $$
(20)
$$\displaystyle \begin{aligned} \tau_{n+1} = {} & \frac{1}{2} (1-2b_{21}-2b_{22}+\upsilon) u^{\prime}_n + \frac{1}{8} (3-4b_{21}+\upsilon) u^{\prime\prime}_n h~+ \\ & \left( -\epsilon_2 u^{\prime}_n + \frac{1}{48}(7-6 b_{21}+\upsilon) u^{(3)}_n \right) h^2 + \frac{1}{384} (15-8b_{21}+\upsilon) u^{(4)}_n h^3 + O(h^4). {} \end{aligned} $$
(21)

Annihilating the first two terms in (20) and (21), and equating the ratio of the \(h^3\) coefficients in (20) and (21) to \(\frac {\upsilon }{\upsilon -1}\), we have the scheme

$$\displaystyle \begin{aligned} V_{n+1} = \frac{1}{7} \begin{bmatrix} -1 & 8 \\ -1 & 8 \end{bmatrix} V_n + \frac{h}{28} \begin{bmatrix} 64 & -20 \\ 29 & 1 \end{bmatrix} \begin{bmatrix} f_{n+1/2} \\ f_n \end{bmatrix} + h^3 \begin{bmatrix} 0 & \epsilon_1 \\ 0 & \epsilon_2 \end{bmatrix} \begin{bmatrix} f_{n+1/2} \\ f_n \end{bmatrix} . \end{aligned} $$
(22)

We can easily check that the scheme (22) satisfies the four conditions in Sect. 2. Further annihilating the coefficients of the \(h^2\) term, we get the optimal values of \(\epsilon_1\) and \(\epsilon_2\):

$$\displaystyle \begin{aligned} \epsilon_1 & = \frac{47 u^{(3)}_n}{168 u^{\prime}_n} {} , \end{aligned} $$
(23)
$$\displaystyle \begin{aligned} \epsilon_2 & = \frac{9 u^{(3)}_n}{224 u^{\prime}_n} {} . \end{aligned} $$
(24)

Our new scheme has the truncation error

$$\displaystyle \begin{aligned} \tau_n = \frac{55}{2688} \begin{bmatrix} 8 \\ 1 \end{bmatrix} u^{(4)}_n h^3 + O(h^4). \end{aligned} $$
(25)

Note that in our new scheme, we need the value of \(u^{(3)}_n\) at each step. This higher derivative can be obtained by differentiating the right-hand side function f of (1) twice. However, we choose to estimate the third-order derivative instead. For \(u^{\prime}_n\), we use the given relation from (1), i.e. u′(t) = f(t, u(t)). For the third-order derivative \(u^{(3)}_n\), we employ the second-order central difference approximation to f″(t, u(t)) at \(t = t_n\),

$$\displaystyle \begin{aligned} u^{(3)}_n = f''(t_n, u_n) \approx \frac{4 ( f_{n+1/2} - 2 f_n + f_{n-1/2} )}{h^2}, \end{aligned} $$
(26)

where \(f_{n+1/2}\), \(f_n\) and \(f_{n-1/2}\) are already available values. For this computation, no additional conditions are necessary. The truncation error is still third order accurate, \(O(h^3)\), as in (25), so by the error inhibiting strategy we end up with a global error that is \(O(h^4)\), which will be confirmed in the following section.
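
A tiny numerical sanity check of (26) can be done along a known solution; below we use u(t) = 1∕(t + 1), for which u′(t) = f = −1∕(t + 1)² and u‴(t) = −6∕(t + 1)⁴ (the point t_n = 0.5 and the step h are arbitrary, and f is evaluated on the exact solution rather than on numerical values).

```python
t_n, h = 0.5, 0.01
f = lambda t: -1.0 / (t + 1.0)**2                       # u'(t) along the exact solution
estimate = 4.0 * (f(t_n + 0.5 * h) - 2.0 * f(t_n) + f(t_n - 0.5 * h)) / h**2
print(estimate, -6.0 / (t_n + 1.0)**4)                  # estimate (26) vs exact u'''(t_n)
```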

We conclude this section with a comparison of three methods. For the type-3 DIMSIM,

$$\displaystyle \begin{aligned} \begin{bmatrix} v_{n+2} \\ v_{n+1} \end{bmatrix} = \frac{1}{4} \begin{bmatrix} 7 & -3 \\ 7 & -3 \end{bmatrix} \begin{bmatrix} v_{n+1} \\ v_n \end{bmatrix} + \frac{h}{8} \begin{bmatrix} 9 & -7 \\ -3 & -3 \end{bmatrix} \begin{bmatrix} f_{n+1} \\ f_n \end{bmatrix} , \end{aligned} $$

the two values \(v_n\) and \(v_{n+1}\) are employed to update \(v_{n+1}\) and to obtain \(v_{n+2}\). For the error inhibiting scheme,

$$\displaystyle \begin{aligned} \begin{bmatrix} v_{n+3/2} \\ v_{n+1} \end{bmatrix} = \frac{1}{6} \begin{bmatrix} -1 & 7 \\ -1 & 7 \end{bmatrix} \begin{bmatrix} v_{n+1/2} \\ v_n \end{bmatrix} + \frac{h}{24} \begin{bmatrix} 55 & -17 \\ 25 & 1 \end{bmatrix} \begin{bmatrix} f_{n+1/2} \\ f_n \end{bmatrix} , \end{aligned} $$

the two values \(v_n\) and \(v_{n+1/2}\) are involved in generating the next two values \(v_{n+1}\) and \(v_{n+3/2}\). For our method (if we use (26) and substitute (23) and (24) for \(\epsilon_1\) and \(\epsilon_2\) in (22), so as to avoid a zero denominator),

$$\displaystyle \begin{aligned} \begin{bmatrix} v_{n+3/2} \\ v_{n+1} \\ v_{n+1/2} \end{bmatrix} = \frac{1}{7} \begin{bmatrix} -1 & 8 & 0 \\ -1 & 8 & 0 \\ 7 & 0 & 0 \end{bmatrix} \begin{bmatrix} v_{n+1/2} \\ v_n \\ v_{n-1/2} \end{bmatrix} + \frac{h}{168} \begin{bmatrix} 572 & -496 & 188 \\ 201 & -48 & 27 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} f_{n+1/2} \\ f_n \\ f_{n-1/2} \end{bmatrix} , \end{aligned} $$

we make use of the previous three values \(v_{n-1/2}\), \(v_n\) and \(v_{n+1/2}\) to evolve the next two values \(v_{n+1}\) and \(v_{n+3/2}\). As for the cost, the error inhibiting method appears to be more computationally expensive than some other methods because of its stability condition; this will be investigated in our future work.
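
For concreteness, a minimal Python sketch of this three-value update follows, using the coefficients displayed above; the two starting values at t₀ ± h∕2 are taken from the exact solution, which is an assumption of the sketch rather than part of the method, and the names eis_h3_solve, f and exact are ours.

```python
import numpy as np

def eis_h3_solve(f, exact, N, t0=0.0, T=1.0):
    # March the three-value update displayed above over [t0, T] with h = (T - t0)/N.
    # Starting values at t0 - h/2 and t0 + h/2 are taken from the exact solution.
    h = (T - t0) / N
    v_prev, v_n, v_half = exact(t0 - 0.5 * h), exact(t0), exact(t0 + 0.5 * h)
    t = t0
    for _ in range(N):
        f_half, f_n, f_prev = f(t + 0.5 * h, v_half), f(t, v_n), f(t - 0.5 * h, v_prev)
        a_part = (-v_half + 8.0 * v_n) / 7.0
        v_new_half = a_part + h * (572.0 * f_half - 496.0 * f_n + 188.0 * f_prev) / 168.0
        v_new      = a_part + h * (201.0 * f_half - 48.0 * f_n + 27.0 * f_prev) / 168.0
        v_prev, v_n, v_half = v_half, v_new, v_new_half
        t += h
    return abs(v_n - exact(T))

# test problem (27): u' = -u^2, u(0) = 1, exact solution 1/(t + 1)
errs = [eis_h3_solve(lambda t, u: -u**2, lambda t: 1.0 / (t + 1.0), N) for N in (40, 80, 160)]
print(errs, [np.log2(errs[i] / errs[i + 1]) for i in range(2)])
# Sect. 5 reports 4th-order convergence of the global error for this scheme
```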

5 Numerical Results

We start with the nonlinear first-order differential equation used in [3]

$$\displaystyle \begin{aligned} \begin{aligned} & u' = -u^2,~t \geqslant 0 \\ & u(0) = 1. \end{aligned} \end{aligned} $$
(27)

The exact solution of this example is u(t) = 1∕(t + 1). The left panel of Fig. 1 shows the global error at time t = 1 versus N in logarithmic scale for the type-3 DIMSIM (blue), the original error inhibiting scheme (red) and our proposed method (green). As seen in the figure, our proposed method is the most accurate of the three and exhibits 4th-order convergence. Table 1 shows the convergence with N for (27): the type-3 DIMSIM yields 2nd-order accuracy, the original error inhibiting scheme 3rd-order accuracy and our proposed method 4th-order accuracy.

Table 1 Global error and order of convergence for u′ = −u² with u(0) = 1

Fig. 1 Global error versus N in logarithmic scale. Left: (27). Right: (28). Blue: type-3 DIMSIM (DIMSIM3), 2nd order. Red: error inhibiting scheme (EIS), 3rd order. Green: our proposed method (EIS with h³), 4th order

Next we proceed to a problem in [5] whose solution changes rapidly on the interval [−2, 2]:

$$\displaystyle \begin{aligned} \begin{aligned} & u' = -4 t^3 u^2,~t \geqslant -10 \\ & u(-10) = 1/10001. \end{aligned} \end{aligned} $$
(28)

The exact solution is u(t) = 1∕(t⁴ + 1). The right panel of Fig. 1 shows the global error at the final time t = 0 versus N in logarithmic scale for the type-3 DIMSIM (blue), the original error inhibiting method (red) and our proposed method (green). We verify again that our proposed method is indeed the most accurate and yields the highest order of convergence. Table 2 shows the convergence with N for (28). Although the type-3 DIMSIM does not reveal 2nd-order accuracy at coarse resolutions, it eventually exhibits the expected order. The original error inhibiting scheme is 3rd-order accurate and our proposed method 4th-order accurate.

Table 2 Global error and order of convergence for u′ = −4t³u² with u(−10) = 1∕10001
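
To reproduce this experiment with the sketch from Sect. 4, only the right-hand side, the exact solution and the time interval need to be swapped; the resolutions below are purely illustrative.

```python
import numpy as np

# problem (28): u' = -4 t^3 u^2 on [-10, 0], exact solution 1/(t^4 + 1),
# reusing eis_h3_solve from the sketch in Sect. 4
errs = [eis_h3_solve(lambda t, u: -4.0 * t**3 * u**2,
                     lambda t: 1.0 / (t**4 + 1.0),
                     N, t0=-10.0, T=0.0) for N in (200, 400, 800)]
print(errs, [np.log2(errs[i] / errs[i + 1]) for i in range(2)])   # observed order, cf. Table 2
```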

6 Conclusions

In this note, we modified and improved the original error inhibiting block one-step method proposed in [3] by means of an RBF basis, which introduces a free parameter. By exploiting this parameter together with the given conditions of the ODE, the local truncation error is further reduced, resulting in a higher order of the global error. It is numerically demonstrated that, with the proposed method, the local truncation error is of 3rd order and the global error of 4th order. As mentioned in Sect. 4, we will investigate the stability of the error inhibiting method and of our proposed method, as well as relaxing the fourth constraint of the error inhibiting method, in our future research.