1 Introduction

Fractional calculus is known as a generalization of differentiation and integration to non-integer order. The birth of fractional calculus dates back to the differential calculus of Leibniz and Newton. Leibniz first introduced derivatives of non-integer order for the function \(f(x)=e^{mx}\), \(m\in \mathbb{R}\), as follows:

$$ \frac{d^{n}e^{mx}}{dx^{n}}=m^{n}e^{mx}, $$

where n is a non-integer value.

Later, this framework of derivatives was studied by Liouville, Riemann, Weyl, Lacroix, Leibniz, Grünwald, Letnikov, and others (cf. [1]).

Since the inception of the definition of fractional order derivatives created by Leibniz, fractional partial differential equations have drawn the attention of many mathematicians and have undergone a steady development (cf. [2–14] etc.). Recently, analytical solutions of fractional differential equations have been obtained by the authors in [15, 16]. Furthermore, fractional partial differential equations have many and various applications in physics and engineering, such as viscoelastic mechanics, power-law phenomena in fluids and complex networks, allometric scaling laws in biology and ecology, colored noise, electrode–electrolyte polarization, dielectric polarization, electromagnetic waves, numerical finance, etc. (cf. [17–21]).

Besides these facts, the combined KdV–mKdV equation, regarded as a combination of the KdV and mKdV equations, has drawn the attention of several authors (cf. [22–30] etc.). The combined KdV–mKdV equation is one of the most popular equations in soliton physics and in the wave propagation of bound particles. The combined KdV–mKdV equation can be expressed as follows:

$$ u_{t}+\alpha uu_{x}+\beta u^{2}u_{x}+su_{xxx}=0, $$
(1)

where α, β, and s are real constants. In general, the fractional combined KdV–mKdV equation is given in the following form for \(\alpha=2\), \(\beta=3\), \(s=-1\):

$$ \frac{\partial^{\gamma}u}{\partial t^{\gamma}}+2u\frac{\partial u}{\partial x}+3u^{2}\frac{\partial u}{\partial x}- \frac{\partial^{3}u}{\partial x^{3}}=0 $$
(2)

with the initial condition

$$ u(x,0)=f ( x ) $$
(3)

and with boundary conditions

$$ u(a,t)=\beta_{1},\qquad u(b,t)=\beta_{2},\quad t\geq t_{0}. $$
(4)

In recent years, the collocation method has become a useful alternative tool for obtaining numerical solutions, since it yields a continuous approximate solution, whereas numerical methods such as the finite difference, Runge–Kutta, and Crank–Nicolson methods yield solutions only at discrete points. Requiring only a small number of collocation points to reach high accuracy, this method has been widely studied by various authors in numerical analysis (cf. [31–36]). On the other hand, radial basis functions are univariate functions which depend only on the distance between points, which makes them attractive for high dimensional differential equations. Furthermore, implementation and coding of the collocation method are very practical with these bases. This method usually becomes more accurate as the number of node points is increased. We will see in the present paper that this is not the case for the numerical solutions of the fractional combined KdV–mKdV equation. This situation led us to examine the geometry of the numerical solutions.

In the present paper, we obtain the exact solution of the fractional combined KdV–mKdV equation by using the \((1/G^{\prime })\)-expansion method. With the help of radial basis functions, we apply the collocation method to this equation and obtain numerical solutions. We observe that the numerical solutions are more accurate for \(h=0.1\) than for \(h=0.01\). Therefore, we investigate the exact solution of the combined KdV–mKdV equation in the Lorentz–Minkowski space. Furthermore, we compute the Gauss curvature and the mean curvature of the exact solution and give a geometrical interpretation of these curvatures at the node points of our numerical solution. Finally, we observe that the exact solution contains some degenerate points in the Lorentz–Minkowski space for \(h=0.01\).

2 Analysis of \((1/G^{\prime})\)-expansion method

The \((1/G^{\prime})\)-expansion method is used to obtain traveling wave solutions of nonlinear differential equations. In this section, we first give a brief description of the \((1/G^{\prime})\)-expansion method, following [37]. Then we obtain the exact solution of the combined KdV–mKdV equation by using this method.

Let us consider the following two-variable general form of nonlinear partial differential equation:

$$ Q \biggl( u,\frac{\partial u}{\partial x},\frac{\partial^{2}u}{\partial x^{2}},\ldots \biggr) =0. $$
(5)

If we apply the traveling wave transformation \(u=u ( x,t ) =u ( \xi ) \), \(\xi=x-Vt\), where V is a constant, to Eq. (5), we get a nonlinear ordinary differential equation for \(u ( \xi ) \) as follows:

$$ Q \bigl( u^{\prime},u^{\prime\prime},\ldots \bigr) =0. $$
(6)

Now, assume that a solution of Eq. (6) can be stated as a polynomial in \((1/G^{\prime})\) by

$$ u ( \xi ) =a_{0}+\overset{m}{\underset{i=1}{\sum }}a_{i} \biggl( \frac{1}{G^{\prime}} \biggr) ^{i}, $$
(7)

where the \(a_{i}\) (\(i=0,1,2,\ldots,m\)) are constants to be determined, m is a positive integer determined by balancing the highest order derivative with the highest order nonlinear term in Eq. (6), and \(G=G ( \xi ) \) satisfies the following second order linear ordinary differential equation:

$$ G^{\prime\prime}+\lambda G^{\prime}+\mu=0. $$
(8)

Here, μ and λ are constants.

The method is constructed as follows.

Firstly, we substitute solution (7) into Eq. (6) and use the second order linear ODE given in (8). Collecting the terms of the same order in \((1/G^{\prime})\), we obtain a set of algebraic equations, since all coefficients of the same order have to vanish. Solving these algebraic equations, we find the constants \(a_{i}\), \(i\geq0\), and V. Then, substituting the \(a_{i}\) and the general solution of Eq. (8) into (7), we obtain the solutions of Eq. (5).

Example 2.1

Let us consider Eq. (2) for \(\gamma=1\). Balancing \(u^{2}u_{x}\) with \(u_{xxx}\) gives \(m=1\). Thus, putting \(u=u ( x,t ) =u ( \xi ) \), \(\xi=x-Vt\), and integrating Eq. (2) once with constant of integration c, we get

$$ c-Vu+u^{2}+u^{3}-u^{\prime\prime}=0. $$
(9)

Now let us write

$$ u ( \xi ) =a_{0}+a_{1} \biggl( \frac{1}{G^{\prime}} \biggr) . $$
(10)

Substituting Eq. (10) into Eq. (9), we obtain a system of algebraic equations for the coefficients \(a_{0}\), \(a_{1}\), μ, c, λ, and V. This system is given as follows:

$$ \begin{aligned} &2a_{0}a_{1}+3a_{0}^{2}a_{1}-a_{1}V-a_{1} \lambda^{2}=0, \\ &a_{1}^{2}+3a_{0}a_{1}^{2}-3a_{1} \lambda\mu=0, \\ &a_{1}^{3}-2a_{1}\mu^{2}=0. \end{aligned} $$
(11)

If we find the solutions of system (11) with the aid of Mathematica, then the following cases occur.
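As a cross-check on the Mathematica computation, the coefficient equations can be rebuilt symbolically. The following sympy sketch (our own illustration, not the authors' code; `phi` stands for \(1/G'\), and the relation \(\phi'=\lambda\phi+\mu\phi^{2}\) follows from Eq. (8)) substitutes the ansatz (10) into Eq. (9) and collects the powers of \(1/G'\):

```python
import sympy as sp

a0, a1, lam, mu, c, V, phi = sp.symbols("a0 a1 lambda mu c V phi")

# phi stands for 1/G'. Differentiating and using G'' = -lambda*G' - mu (Eq. (8))
# gives phi' = lambda*phi + mu*phi**2, hence, by the chain rule,
dphi = lam * phi + mu * phi**2
d2phi = sp.expand(sp.diff(dphi, phi) * dphi)    # phi'' as a polynomial in phi

u = a0 + a1 * phi                               # ansatz (10)
upp = sp.expand(a1 * d2phi)                     # u'' = a1 * phi''
residual = sp.expand(c - V * u + u**2 + u**3 - upp)   # left-hand side of Eq. (9)

# the coefficient of each power of phi must vanish
system = [residual.coeff(phi, k) for k in range(4)]
for eq in system:
    print(eq, "= 0")
```

The coefficient of \(\phi^{0}\) determines c, while the remaining three equations reproduce the system for \(a_{0}\), \(a_{1}\), and V.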

Case 1. If we put

$$ a_{0}=\frac{1}{6} ( -2-3\sqrt{2}\lambda ) ,\qquad a_{1}=-\sqrt{2}\mu, \qquad V=\frac{1}{6} \bigl( -2+3 \lambda^{2} \bigr) $$
(12)

and substitute the \(a_{0}\) and \(a_{1}\) values into (10), we obtain the following traveling wave solution of Eq. (2):

$$\begin{aligned}& \xi=x-\frac{1}{6} \bigl( -2+3\lambda^{2} \bigr) t, \end{aligned}$$
(13)
$$\begin{aligned}& u_{1} ( \xi ) =\frac{1}{6} ( -2-3\sqrt{2}\lambda ) -\sqrt{2}\mu \biggl( \frac{1}{-\frac{\mu}{\lambda}+\cosh(\xi\lambda )-\sinh(\xi\lambda)} \biggr) \end{aligned}$$
(14)

(see Fig. 1).

Figure 1
figure 1

Exact solution \(u_{1} ( x,t ) \) of Eq. (2) by substituting the values \(\lambda=0.834\), \(\mu =1\), \(\alpha=0.8\), \(-3\leq x\leq3\), \(-4\leq t\leq4\), \(\gamma=1\) for the 2D graphic

Case 2. If we put

$$ a_{0}=\frac{1}{6} ( -2+3\sqrt{2}\lambda ) ,\qquad a_{1}=\sqrt {2}\mu,\qquad V=\frac{1}{6} \bigl( -2+3 \lambda^{2} \bigr) $$
(15)

and substitute the \(a_{0}\) and \(a_{1}\) values into (10), we obtain the following traveling wave solution of Eq. (2):

$$\begin{aligned}& \xi = x+\frac{1}{6} \bigl( 2-3\lambda^{2} \bigr) t, \end{aligned}$$
(16)
$$\begin{aligned}& u_{2} ( \xi ) = \frac{1}{6} ( -2+3\sqrt{2}\lambda ) +\sqrt{2}\mu \biggl( \frac{1}{-\frac{\mu}{\lambda}+\cosh(\xi\lambda )-\sinh(\xi\lambda)} \biggr) \end{aligned}$$
(17)

(see Fig. 2).

Figure 2
figure 2

Exact solution \(u_{2} ( x,t )\) of Eq. (2) by substituting the values \(\lambda=0.834\), \(\mu=1\), \(\alpha=0.8\), \(-3\leq x\leq3\), \(-4\leq t\leq4\), \(\gamma=1\) for the 2D graphic

3 Collocation method using radial basis functions

In Eq. (2), \(\frac{\partial^{\gamma}u}{\partial t^{\gamma}}\) is the Caputo fractional derivative of \(u ( x,t )\), which can be discretized through the L1 formula as follows:

$$ \frac{\partial^{\gamma}u}{\partial t^{\gamma}}= \textstyle\begin{cases} \frac{ ( \Delta t ) ^{-\gamma}}{\Gamma ( 2-\gamma ) } \sum_{k=0}^{n} [ u_{m}^{n+1-k}-u_{m}^{n-k} ] [ ( k+1 ) ^{1-\gamma}-k^{1-\gamma} ] ,&n\geq1, \\ \frac{ ( \Delta t ) ^{-\gamma}}{\Gamma ( 2-\gamma ) } ( u_{m}^{1}-u_{m}^{0} ) ,& n=0. \end{cases} $$
(18)
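The L1 formula (18) can be sketched in a few lines of Python (our own illustration; the function name and test function are ours). A convenient check is the Caputo derivative of \(u(t)=t\), which equals \(t^{1-\gamma}/\Gamma(2-\gamma)\) and is reproduced exactly by the L1 weights, since the scheme is exact for linear functions:

```python
import math

def l1_caputo(u_hist, dt, gamma):
    """L1 approximation (18) of the Caputo time derivative of order
    0 < gamma < 1 at the newest level, given u_hist = [u^0, ..., u^{n+1}]."""
    n = len(u_hist) - 2
    s = 0.0
    for k in range(n + 1):
        s += ((u_hist[n + 1 - k] - u_hist[n - k])
              * ((k + 1) ** (1 - gamma) - k ** (1 - gamma)))
    return dt ** (-gamma) / math.gamma(2 - gamma) * s

# check against the exact Caputo derivative of u(t) = t at t = 1,
# which is t^(1-gamma)/Gamma(2-gamma)
gamma, dt = 0.8, 1e-3
u = [i * dt for i in range(1001)]        # samples of u(t) = t on [0, 1]
approx = l1_caputo(u, dt, gamma)
exact = 1.0 ** (1 - gamma) / math.gamma(2 - gamma)
print(abs(approx - exact))               # only floating-point round-off remains
```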

Now, we discretize the time derivative in (2) by using the Caputo derivative defined in [38] through L1 formula and the space derivative by the Crank–Nicolson formula between two time levels n and \(n+1\), respectively. Thus, we have

$$\begin{aligned}& \frac{ ( \Delta t ) ^{-\gamma}}{\Gamma ( 2-\gamma ) }\sum_{k=0}^{n} \bigl[ u_{m}^{n+1-k}-u_{m}^{n-k} \bigr] \bigl[ ( k+1 ) ^{1-\gamma}-k^{1-\gamma} \bigr] +2\frac{ ( uu_{x} ) ^{n+1}+ ( uu_{x} ) ^{n}}{2} \\& \quad {} +3\frac{ ( u^{2}u_{x} ) ^{n+1}+ ( u^{2}u_{x} ) ^{n}}{2}-\frac{u_{xxx}^{n+1}+u_{xxx}^{n}}{2}=0. \end{aligned}$$
(19)

Nonlinear terms of the above equation can be linearized by using the following equations:

$$\begin{aligned}& \bigl( u^{2}u_{x} \bigr) _{m}^{n+1}=2u^{n}u^{n+1}u_{x}^{n}-2 \bigl( u^{2} \bigr) ^{n}u_{x}^{n}+ \bigl( u^{2} \bigr) ^{n}u_{x}^{n+1}, \end{aligned}$$
(20)
$$\begin{aligned}& ( uu_{x} ) _{m}^{n+1}=u_{x}^{n}u^{n+1}+u_{x}^{n+1}u^{n}-u_{x}^{n}u^{n}. \end{aligned}$$
(21)

By a straightforward computation in Eq. (19), it follows that

$$\begin{aligned}& \frac{2 ( \Delta t ) ^{-\gamma}}{\Gamma ( 2-\gamma ) }\sum_{k=0}^{n} \bigl[ ( k+1 ) ^{1-\gamma}-k^{1-\gamma} \bigr] \bigl[ u_{m}^{n-k+1}-u_{m}^{n-k} \bigr] +2u_{x}^{n}u^{n+1} \\& \quad {} +2u_{x}^{n+1}u^{n}+6u^{n}u^{n+1}u_{x}^{n}-6 \bigl( u^{2} \bigr) ^{n}u_{x}^{n} \\& \quad {} +3 \bigl( u^{2} \bigr) ^{n}u_{x}^{n+1}+3 \bigl( u^{2} \bigr) ^{n}u_{x}^{n}-u_{xxx}^{n+1}-u_{xxx}^{n}=0. \end{aligned}$$
(22)

Now, we shall use radial basis functions.

Radial basis functions are univariate functions whose value depends only on the distance from the origin, so that \(\phi ( x ) =\phi ( r ) \in \mathbb{R}\), \(x\in \mathbb{R}^{n}\), or on the distance from a center \(x_{j}\), so that \(\phi ( x-x_{j} ) =\phi ( r_{j} ) \in \mathbb{R}\). We consider each function satisfying \(\phi(x)=\phi ( \Vert x \Vert _{2} ) \). In general, the norm \(r_{j}= \Vert x-x_{j} \Vert _{2}\) is taken to be the Euclidean distance. The globally supported radial basis functions used throughout the present paper are given as follows:

$$\begin{aligned} \begin{aligned} &\text{Multiquadric (MQ)} \quad \phi ( r_{j} ) =\sqrt{r_{j}^{2}+c^{2}}, \\ &\text{Inverse multiquadric (IMQ)} \quad \phi ( r_{j} ) =1/\sqrt {r_{j}^{2}+c^{2}}, \\ &\text{Inverse quadratic (IQ)} \quad \phi ( r_{j} ) =1/ \bigl( r_{j}^{2}+c^{2} \bigr) , \\ &\text{Gaussian (G)} \quad \phi ( r_{j} ) =\exp \bigl( -c^{2}r_{j}^{2} \bigr) , \end{aligned} \end{aligned}$$
(23)

where c is the shape parameter. Let \(x_{i}\) (\(i=0,\ldots,n\)) be the collocation points in the interval \([ a,b ] \) such that \(x_{0}=a\) and \(x_{n}=b\). Then the solution of Eq. (2) is expressed by the following approximate solution:

$$ u ( x,t ) \approx\sum_{j=0}^{n} \lambda_{j}\phi_{j} ( r_{j} ) . $$
(24)

Here, n is the number of data points, ϕ is a radial basis function, and the \(\lambda_{j}\) are unknown coefficients determined by the collocation method. Thus, for each collocation point, Eq. (24) becomes

$$ u ( x_{i},t ) \approx\sum_{j=0}^{n} \lambda_{j}\phi_{j} ( r_{ij} ) . $$
(25)
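The structure of (24)–(25) is that of a square collocation system \(\Phi\lambda=f\) with \(\Phi_{ij}=\phi(r_{ij})\). The following numpy sketch (our own illustration with the MQ basis; the target function, node count, and shape parameter are illustrative choices, not the paper's test problem) shows the basic interpolation step:

```python
import numpy as np

def mq(r, c):
    """Multiquadric basis phi(r) = sqrt(r^2 + c^2)."""
    return np.sqrt(r**2 + c**2)

def rbf_interpolant(x_nodes, f_vals, c):
    """Solve the collocation system (25) for the coefficients lambda_j
    and return the approximant u(x) of Eq. (24)."""
    r = np.abs(x_nodes[:, None] - x_nodes[None, :])   # pairwise distances r_ij
    lam = np.linalg.solve(mq(r, c), f_vals)           # Phi * lambda = f
    def u(x):
        rx = np.abs(np.atleast_1d(x)[:, None] - x_nodes[None, :])
        return mq(rx, c) @ lam
    return u

x_nodes = np.linspace(0.0, 1.0, 11)     # h = 0.1 on [0, 1]
f = np.sin(np.pi * x_nodes)             # illustrative target function
u = rbf_interpolant(x_nodes, f, c=0.2)

xe = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(u(xe) - np.sin(np.pi * xe)))
print(err)                              # small interpolation error
```

Note that the conditioning of Φ depends strongly on the shape parameter c, which is why the choice of c matters in the experiments below.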

Putting Eq. (25) into Eq. (22) and writing the collocation points \(x_{i}\) (\(i=0,\ldots,n\)) instead of x, we obtain the following equations:

$$\begin{aligned}& \frac{2 ( \Delta t ) ^{-\gamma}}{\Gamma ( 2-\gamma ) }\sum_{k=0}^{n} \sum_{j=0}^{n} \bigl[ ( k+1 ) ^{1-\gamma}-k^{1-\gamma} \bigr] \bigl[ \lambda_{j}^{n-k+1}- \lambda _{j}^{n-k} \bigr] \phi_{j} ( r_{ij} ) \\& \quad {} +2\sum_{j=0}^{n} \lambda_{j}^{n+1}\phi_{j} ( r_{ij} ) \sum_{j=0}^{n}\lambda_{j}^{n} \phi_{j}^{\prime} ( r_{ij} ) +2\sum _{j=0}^{n}\lambda_{j}^{n+1} \phi_{j}^{\prime} ( r_{ij} ) \sum _{j=0}^{n}\lambda_{j}^{n} \phi_{j} ( r_{ij} ) \\& \quad {} +6\sum_{j=0}^{n} \lambda_{j}^{n}\phi_{j} ( r_{ij} ) \sum _{j=0}^{n}\lambda_{j}^{n+1} \phi_{j} ( r_{ij} ) \sum_{j=0}^{n} \lambda_{j}^{n}\phi_{j}^{\prime} ( r_{ij} ) \\& \quad {} -6 \Biggl( \sum_{j=0}^{n} \lambda_{j}^{n}\phi_{j} ( r_{ij} ) \Biggr) ^{2}\sum_{j=0}^{n} \lambda_{j}^{n}\phi _{j}^{\prime} ( r_{ij} ) +3 \Biggl( \sum_{j=0}^{n} \lambda _{j}^{n}\phi_{j} ( r_{ij} ) \Biggr) ^{2}\sum_{j=0}^{n} \lambda_{j}^{n+1}\phi_{j}^{\prime} ( r_{ij} ) \\& \quad {} +3 \Biggl( \sum_{j=0}^{n} \lambda_{j}^{n}\phi_{j} ( r_{ij} ) \Biggr) ^{2}\sum_{j=0}^{n} \lambda_{j}^{n}\phi _{j}^{\prime} ( r_{ij} ) -\sum_{j=0}^{n}\lambda _{j}^{n+1}\phi_{j}^{\prime\prime\prime} ( r_{ij} ) \\& \quad {} -\sum_{j=0}^{n} \lambda_{j}^{n}\phi_{j}^{\prime\prime\prime } ( r_{ij} ) =0 \end{aligned}$$
(26)

and

$$ u ( x_{i},t ) =\sum_{j=0}^{n} \lambda_{j}^{n+1}\phi_{j} ( r_{ij} ) = \alpha_{i},\quad i=0,1,\ldots,n. $$
(27)

Equations (26) and (27) constitute a system of \((n+1)\) linear equations in the \((n+1)\) unknown parameters \(\lambda_{j}^{n+1}\). Before solving the system, the boundary conditions \(u ( a,t ) =\alpha_{1}\) and \(u ( b,t ) =\alpha_{2}\) are imposed. Thus, a reduced linear system is obtained from Eq. (25) at each time step.

3.1 Stability analysis

In this subsection, we investigate the stability of the method with the help of von Neumann analysis.

The third order derivative in Eq. (22) can be approximated using linear combinations of the values of \(u^{k}( x)\) as follows:

$$ \frac{\partial^{3}u^{k} ( x ) }{\partial x^{3}}\bigg| _{x=x_{i}}=\sum_{i=0}^{n} \beta_{i}^{ ( k,i ) }u^{k} \bigl( x_{i}^{(k)} \bigr) , $$
(28)

where \(\{ \beta_{i}^{ ( k,i ) } \} _{i=1}^{N}\) are the RBF-FD coefficients corresponding to the third order derivative. It is well known that stability analysis only applies to partial differential equations with constant coefficients. For simplicity, stencils with three uniform nodes are used [39]. For \(x= \{ x_{m-1},x_{m},x_{m+1} \} \), we get

$$\begin{aligned}& \frac{2 ( \Delta t ) ^{-\gamma}}{\Gamma ( 2-\gamma ) } \sum_{k=0}^{n} \bigl[ ( k+1 ) ^{1-\gamma}-k^{1-\gamma} \bigr] \\& \quad {} \bigl[ \bigl(u_{m-1}^{n-k+1}+u_{m}^{n-k+1}+u_{m+1}^{n-k+1} \bigr)-\bigl(u_{m-1}^{n-k}+u_{m}^{n-k}+u_{m+1}^{n-k} \bigr) \bigr] \\& \quad {}+2B \bigl( u_{m-1}^{n+1}+u_{m}^{n+1}+u_{m+1}^{n+1} \bigr) \\& \quad {}+2A \bigl( \alpha_{m-1}^{ ( n+1,m-1 ) }u_{m-1}^{n}+ \alpha _{m}^{ ( n+1,m ) }u_{m}^{n}+ \alpha_{m+1}^{ ( n+1,m+1 ) }u_{m+1}^{n} \bigr) \\& \quad {}+6AB \bigl( u_{m-1}^{n+1}+u_{m}^{n+1}+u_{m+1}^{n+1} \bigr) \\& \quad {}-6A^{2} \bigl( \alpha_{m-1}^{ ( n,m-1 ) }u_{m-1}^{n}+ \alpha _{m}^{ ( n,m ) }u_{m}^{n}+ \alpha_{m+1}^{ ( n,m+1 ) }u_{m+1}^{n} \bigr) \\& \quad {}+3A^{2} \bigl( \alpha_{m-1}^{ ( n,m-1 ) }u_{m-1}^{n}+ \alpha _{m}^{ ( n,m ) }u_{m}^{n}+ \alpha_{m+1}^{ ( n,m+1 ) }u_{m+1}^{n} \bigr) \\& \quad {}+3A^{2} \bigl( \alpha_{m-1}^{ ( n+1,m-1 ) }u_{m-1}^{n}+ \alpha _{m}^{ ( n+1,m ) }u_{m}^{n}+ \alpha_{m+1}^{ ( n+1,m+1 ) }u_{m+1}^{n} \bigr) \\& \quad {}- \bigl( \beta_{m-1}^{ ( n+1,m-1 ) }u_{m-1}^{n+1}+ \beta _{m}^{ ( n+1,m ) }u_{m}^{n+1}+ \beta_{m+1}^{ ( n+1,m+1 ) }u_{m+1}^{n+1} \bigr) \\& \quad {}- \bigl( \beta_{m-1}^{ ( n,m-1 ) }u_{m-1}^{n}+ \beta _{m}^{ ( n,m ) }u_{m}^{n}+ \beta_{m+1}^{ ( n,m+1 ) }u_{m+1}^{n} \bigr) =0, \end{aligned}$$
(29)

where \(A=u^{n}\), \(B=u_{x}^{n}\).

Assume the solutions of (29) to be as follows:

$$ u^{n} ( x_{j} ) =\xi^{n}e^{i\varphi j},\quad j=m-1,m,m+1, $$
(30)

where i is the imaginary unit and φ is real. Firstly, substituting the Fourier mode (30) into the recurrence relation (29), we obtain

$$\begin{aligned}& \frac{2 ( \Delta t ) ^{-\gamma}}{\Gamma ( 2-\gamma ) } \sum_{k=0}^{n} \bigl[ ( k+1 ) ^{1-\gamma}-k^{1-\gamma} \bigr] \bigl[ \xi^{n-k+1}-\xi^{n-k} \bigr] \bigl( e^{i ( m-1 ) \varphi}+e^{im\varphi}+e^{i ( m+1 ) \varphi} \bigr) \\& \quad {}+ \bigl( 2+6AB\xi^{n+1} \bigr) \bigl( e^{i ( m-1 ) \varphi }+e^{im\varphi}+e^{i ( m+1 ) \varphi} \bigr) \\& \quad {} + \bigl( 2A+3A^{2} \bigr) \xi^{n+1} \bigl( \alpha_{m-1}^{ ( n+1,m-1 ) }e^{i ( m-1 ) \varphi}+\alpha_{m}^{ ( n+1,m ) }e^{im\varphi}+ \alpha_{m+1}^{ ( n+1,m+1 ) }e^{i ( m+1 ) \varphi} \bigr) \\& \quad {} -3\xi^{n}A^{2} \bigl( \alpha_{m-1}^{ ( n,m-1 ) }e^{i ( m-1 ) \varphi}+ \alpha_{m}^{ ( n,m ) }e^{im\varphi }+\alpha _{m+1}^{ ( n,m+1 ) }e^{i ( m+1 ) \varphi} \bigr) \\& \quad {} -\xi^{n+1} \bigl( \beta_{m-1}^{ ( n+1,m-1 ) }e^{i ( m-1 ) \varphi}+ \beta_{m}^{ ( n+1,m ) }e^{im\varphi }+\beta _{m+1}^{ ( n+1,m+1 ) }e^{i ( m+1 ) \varphi} \bigr) \\& \quad {}-\xi^{n} \bigl( \beta_{m-1}^{ ( n,m-1 ) }e^{i ( m-1 ) \varphi}+ \beta_{m}^{ ( n,m ) }e^{im\varphi}+\beta _{m+1}^{ ( n,m+1 ) }e^{i ( m+1 ) \varphi} \bigr) =0. \end{aligned}$$
(31)

Next, let \(\xi^{n+1}=\zeta\xi^{n}\) and assume that \(\zeta=\zeta ( \varphi ) \) is independent of time. Then we easily obtain the following expression:

$$\begin{aligned}& \frac{2 ( \Delta t ) ^{-\gamma}}{\Gamma ( 2-\gamma ) }\sum_{k=0}^{n} \bigl[ ( k+1 ) ^{1-\gamma}-k^{1-\gamma} \bigr] \bigl[ \xi^{n-k+1}-\xi^{n-k} \bigr] \bigl( e^{-i\varphi }+1+e^{i\varphi} \bigr) \\& \quad {} - \bigl( \beta_{m-1}^{ ( n,m-1 ) }e^{-i\varphi}+\beta _{m}^{ ( n,m ) }+\beta_{m+1}^{ ( n,m+1 ) }e^{i\varphi } \bigr) \\& \quad {} -3A^{2} \bigl( \alpha_{m-1}^{ ( n,m-1 ) }e^{-i\varphi }+ \alpha _{m}^{ ( n,m ) }+\alpha_{m+1}^{ ( n,m+1 ) }e^{i\varphi } \bigr) \\& \quad {} +\xi\bigl( \bigl( 2A+3A^{2} \bigr) \bigl( \alpha_{m-1}^{ ( n+1,m-1 ) }e^{-i\varphi}+\alpha_{m}^{ ( n+1,m ) }+ \alpha_{m+1}^{ ( n+1,m+1 ) }e^{i\varphi} \bigr) \\& \quad {} -\beta_{m-1}^{ ( n+1,m-1 ) }e^{-i\varphi}+ \beta_{m}^{ ( n+1,m ) }+\beta_{m+1}^{ ( n+1,m+1 ) }e^{i\varphi} \\& \quad {} + \bigl( 2+6AB\xi^{n+1} \bigr) \bigl( e^{-i\varphi}+1+e^{i\varphi } \bigr) \bigr)=0. \end{aligned}$$
(32)

From Eq. (32), we can express the amplification factor ζ as follows:

$$ \vert \zeta \vert = \biggl\vert \frac {X_{1}+iX_{2}}{Y_{1}+iY_{2}} \biggr\vert , $$
(33)

where

$$\begin{aligned}& \begin{gathered} X_{1} = \frac{2 ( \Delta t ) ^{-\gamma}}{\Gamma ( 2-\gamma ) }\sum_{k=0}^{n-1} \bigl[ ( k+1 ) ^{1-\gamma }-k^{1-\gamma} \bigr] \bigl[ \zeta^{-k}-\zeta^{-k-1} \bigr] ( 2\cos \varphi+1 ) \\ \hphantom{X_{1} ={}}{}-\cos\varphi \bigl( 3A^{2} ( a+c ) -h-m \bigr) -3A^{2}b-p, \\ X_{2} = \bigl( 3A^{2} ( a-c ) -h+m \bigr) \sin\varphi, \\ Y_{1} = \cos ( \varphi ) \bigl( \bigl( 2+3A^{2} \bigr) (d+f)+4+12AB-n-r\bigr)+ \bigl( 2-3A^{2} \bigr) b-k, \\ Y_{2} = \bigl( \bigl( 2-3A^{2} \bigr) ( a-c ) -h+m \bigr) \sin ( \varphi ) \end{gathered} \end{aligned}$$

for

$$\begin{aligned}& \alpha_{m-1}^{ ( n,m-1 ) }=a, \qquad \alpha_{m}^{ ( n,m ) }=b, \qquad \alpha_{m+1}^{ ( n,m+1 ) }=c, \\ & \alpha _{m-1}^{ ( n+1,m-1 ) }=d,\qquad \alpha_{m}^{ ( n+1,m ) }=e, \qquad \alpha_{m+1}^{ ( n,m+1 ) }=f, \\ & \beta_{m-1}^{ ( n,m-1 ) }=h, \qquad \beta_{m}^{ ( n,m ) }=k, \qquad \beta_{m+1}^{ ( n,m+1 ) }=m, \\ & \beta_{m-1}^{ ( n+1,m-1 ) }=n, \qquad \beta_{m}^{ ( n+1,m ) }=p, \qquad \beta_{m+1}^{ ( n+1,m+1 ) }=r. \end{aligned}$$

Therefore, we get

$$ \vert \zeta \vert ^{2}=\frac{X_{1}^{2}+X_{2}^{2}}{Y_{1}^{2}+Y_{2}^{2}}. $$
(34)

If the condition \(\vert \zeta \vert \leq1\) is satisfied, then the proposed method is unconditionally stable.

3.2 \(L_{2}\) and \(L_{\infty}\) error norms

For the test problem used in the present study, numerical solutions of Eq. (2) are computed with the help of Mathematica software. The \(L_{2}\) error norm is given by

$$ \mathit{L}_{2}= \bigl\Vert U^{\mathrm{exact}}-U_{N} \bigr\Vert _{2}=\sqrt{h\sum _{j=0}^{N} \bigl\vert U_{j}^{\mathrm{exact}}- ( U_{N} ) _{j} \bigr\vert ^{2}} $$

and the \(L_{\infty}\) error norm is given by

$$ \mathit{L}_{\infty}= \bigl\Vert U^{\mathrm{exact}}-U_{N} \bigr\Vert _{\infty }=\max_{j} \bigl\vert U_{j}^{\mathrm{exact}}- ( U_{N} ) _{j} \bigr\vert . $$

They are calculated to show the accuracy of the results.
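Both norms are straightforward to compute once the exact and numerical solutions are sampled at the nodes; a minimal sketch (function name and sample data are ours, purely for illustration):

```python
import numpy as np

def error_norms(u_exact, u_num, h):
    """L2 and Linf error norms between exact and numerical nodal values."""
    d = np.abs(np.asarray(u_exact) - np.asarray(u_num))
    return np.sqrt(h * np.sum(d**2)), np.max(d)

# tiny illustrative call with hypothetical nodal values
L2, Linf = error_norms([0.0, 1.0, 2.0], [0.0, 1.0, 2.5], h=0.1)
print(L2, Linf)   # sqrt(0.1 * 0.25) = 0.15811..., 0.5
```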

3.3 Test problem

For \(\lambda=0.834\), \(\mu=1\), \(\gamma=0.8\), the exact solution of the fractional combined KdV–mKdV equation is given as follows:

$$\begin{aligned} u(x,t) =&\frac{1}{6} ( -2-3\sqrt{2}\lambda ) \\ &{}-\sqrt{2}\mu \biggl( \frac{1}{-\frac{\mu}{\lambda}+\cosh( ( x+\frac {1}{6} ( 2-3\lambda^{2} ) t ) \lambda)-\sinh( ( x+\frac {1}{6} ( 2-3\lambda^{2} ) t ) \lambda)} \biggr) \end{aligned}$$

for \(t\geq t_{0}\), \(0\leq x\leq1\).
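For \(\gamma=1\), this closed form solves Eq. (2) exactly (it is the Case 1 solution of Sect. 2). A quick sympy check of ours evaluates the residual \(u_{t}+2uu_{x}+3u^{2}u_{x}-u_{xxx}\) at a few arbitrary sample points:

```python
import sympy as sp

x, t = sp.symbols("x t")
lam = sp.Rational(834, 1000)            # lambda = 0.834
mu = sp.Integer(1)

xi = x + sp.Rational(1, 6) * (2 - 3 * lam**2) * t
u = (sp.Rational(1, 6) * (-2 - 3 * sp.sqrt(2) * lam)
     - sp.sqrt(2) * mu / (-mu / lam + sp.cosh(lam * xi) - sp.sinh(lam * xi)))

# residual of Eq. (2) with gamma = 1
res = sp.diff(u, t) + 2 * u * sp.diff(u, x) + 3 * u**2 * sp.diff(u, x) - sp.diff(u, x, 3)
res_num = sp.lambdify((x, t), res, "math")
vals = [abs(res_num(xv, tv)) for xv, tv in [(0.2, 0.5), (0.8, 1.0), (0.5, 2.0)]]
print(max(vals))   # vanishes up to floating-point round-off
```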

In our computations, the linearization technique has been applied for the numerical solution of the test problem. The numerical tests are performed using the radial basis functions G, IQ, IMQ, and MQ. The collocation matrix does not become ill-conditioned during the runs with the Gaussian (G) radial basis functions for \(c=10^{-15}\) in Eq. (2). In Table 1, the error norms \(L_{2}\) and \(L_{\infty}\) of the numerical solutions with the IQ basis are compared for \(c=10^{-15}\) and \(\Delta t=0.01\) at time \(t=1\). In Fig. 3, the numerical solutions obtained by the IQ, IMQ, and MQ bases are compared for \(h=0.1\) at time \(t=1\).

Figure 3
figure 3

Comparison of the exact solution and the obtained solutions using IQ, MQ, IMQ bases for \(h=0.1\)

Table 1 Comparison of the error norms \(L_{2}\) and \(L_{\infty}\) of the obtained solution using IQ radial basis for \(h=0.1\) and \(h=0.01\) at \(c=10^{-15}\), \(\Delta t=0.01\)

We obtain good results for \(h=0.1\). However, we do not get better results for \(h=0.01\). Furthermore, we cannot obtain any results as the number of nodes increases further.

4 Geometry of the exact solution

The geometry of the exact solutions of various equations has been intensively studied by different authors in various ways (cf. [40–44]). In this section, we investigate the exact solution and the numerical solutions in the 3-dimensional space-time known as the Lorentz–Minkowski space \(\mathbb{R}_{1}^{3}\). The main reason for choosing to work in this space is that the Lorentz–Minkowski space plays an important role in both special and general relativity, with its space and time coordinates.

First, we need to recall some basic facts and notation in \(\mathbb{R}_{1}^{3}\) (cf. [45–49]).

Let \(X= ( x_{1},x_{2},x_{3} ) \) and \(Y= ( y_{1},y_{2},y_{3} ) \) be any two vector fields in \(\mathbb{R}_{1}^{3}\). Then the inner product of X and Y is defined by

$$ \langle X,Y \rangle=x_{1}y_{1}+x_{2}y_{2}-x_{3}y_{3}. $$
(35)

Note that a vector field X is called

  1. (i)

    a timelike vector if \(\langle X,X \rangle<0\),

  2. (ii)

    a spacelike vector if \(\langle X,X \rangle>0\),

  3. (iii)

    a lightlike \(( \text{or degenerate} ) \) vector if \(\langle X,X \rangle=0\) and \(X\neq0\).
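This threefold classification can be encoded directly. The following small sketch (function names are ours, purely for illustration) implements the inner product (35) and the causal character of a vector:

```python
def lorentz_inner(X, Y):
    """Inner product (35) of R^3_1, signature (+, +, -)."""
    return X[0]*Y[0] + X[1]*Y[1] - X[2]*Y[2]

def causal_character(X, tol=1e-12):
    """Classify a nonzero vector of R^3_1 by the sign of <X, X>."""
    q = lorentz_inner(X, X)
    if q < -tol:
        return "timelike"
    if q > tol:
        return "spacelike"
    return "lightlike" if any(abs(c) > tol for c in X) else "zero"

print(causal_character((1.0, 0.0, 0.0)))   # spacelike
print(causal_character((0.0, 0.0, 1.0)))   # timelike
print(causal_character((1.0, 0.0, 1.0)))   # lightlike
```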

Thus, the inner product in \(\mathbb{R}_{1}^{3}\) splits each vector field into three categories, namely (i) spacelike, (ii) timelike, and (iii) lightlike (degenerate) vectors. The category is known as causal character of a vector. The set of all lightlike vectors is called null cone. Furthermore, the norm of a vector X is defined by its causal character as follows:

  1. (i)

    \(\Vert X \Vert =\sqrt{ \langle X,X \rangle}\) if X is a spacelike vector,

  2. (ii)

    \(\Vert X \Vert =-\sqrt{ \langle X,X \rangle}\) if X is a timelike vector.

Let X be a unit timelike vector and \(e=(0,0,1)\) in \(\mathbb{R}_{1}^{3}\). Then X is called

  1. (i)

    a timelike future pointing vector if \(\langle X,e \rangle>0\),

  2. (ii)

    a timelike past pointing vector if \(\langle X,e \rangle <0\).

Now, let \(r ( x,t ) \) be a surface in \(\mathbb{R}_{1}^{3}\). Then the normal vector N at a point in \(r ( x,t ) \) is given by

$$ N=\frac{r_{x}\wedge r_{t}}{ \Vert r_{x}\wedge r_{t} \Vert }, $$
(36)

where ∧ denotes the wedge product in \(\mathbb{R}_{1}^{3}\). A surface is called

  1. (i)

    a timelike surface if N is spacelike,

  2. (ii)

    a spacelike surface if N is timelike,

  3. (iii)

    a lightlike (or degenerate) surface if N is lightlike.

We note that a point is called regular if \(N\neq0\) and singular if \(N=0\).

Now, let us consider a surface given by

$$ r ( x,t ) = \bigl( x,t,u ( x,t ) \bigr) , $$
(37)

where \(u ( x,t )\) is the exact solution of the fractional combined KdV–mKdV equation given by

$$\begin{aligned} u ( x,t ) =&\frac{1}{6} ( -2+3\sqrt{2}\lambda ) \\ &{} +\sqrt{2}\mu \biggl( \frac{1}{-\frac{\mu}{\lambda}+\cosh( ( x+\frac{1}{6} ( 2-3\lambda^{2} ) t ) \lambda)-\sinh( ( x+\frac{1}{6} ( 2-3\lambda^{2} ) t ) \lambda)} \biggr) . \end{aligned}$$

In view of (36), the normal vector field of \(r ( x,t ) \) becomes

$$\begin{aligned} N ( x,t ) =&(e^{\frac{2\lambda t}{3}-\lambda^{3}t+2\lambda x}\bigl(18e^{\lambda^{3}t-\frac{2}{3}\lambda ( t+3x ) }\lambda ^{4}-72e^{\frac{\lambda t}{3}-\frac{\lambda^{3}t}{2}t+\lambda x}\lambda\mu ^{3} \\ &{}+18e^{\frac{2}{3}\lambda t-\lambda^{3}t+2\lambda x}\mu^{4}-\lambda ^{2}\mu \bigl(72e^{\frac{1}{6}\lambda ( ( -2+3\lambda^{2} ) t-6x ) }\lambda \\ &{}+\bigl(-108+40\lambda^{4}-12\lambda^{6}+9\lambda^{8} \mu\bigr)\bigr)\bigr)\bigr)/ \bigl( 18 \bigl( \lambda -e^{\lambda ( \frac{1}{6} ( 2-3\lambda^{2} ) t+x ) }\mu \bigr) ^{4} \bigr). \end{aligned}$$
(38)

From (38), it is clear that \(r ( x,t ) \) is a regular surface, that is, every point of it is a regular point.

As a consequence of the above facts, we immediately get the following.

Corollary 4.1

For the node points, we have Table 2.

Table 2 Classification of \(r(x,t)\) surface at node points
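The classification of the node points can be reproduced numerically. For the graph surface (37) one has \(r_{x}\wedge r_{t}=(-u_{x},-u_{t},-1)\) up to normalization, so \(\langle N,N\rangle\) has the sign of \(u_{x}^{2}+u_{t}^{2}-1\). The following sketch of ours (test-problem parameters \(\lambda=0.834\), \(\mu=1\), at \(t=1\); the sign of \(u_{\xi}\) is immaterial since only squares enter) scans the nodes and bisects for the degenerate point:

```python
import math

lam, mu = 0.834, 1.0                    # test-problem parameters
kappa = (2 - 3 * lam**2) / 6            # xi = x + kappa*t

def u_xi(x, t):
    # magnitude of du/dxi for the exact solution of Sect. 3.3
    E = math.exp(-lam * (x + kappa * t))
    return math.sqrt(2) * mu * lam * E / (-mu / lam + E) ** 2

def NN(x, t):
    # r_x ^ r_t = (-u_x, -u_t, -1) for the graph (37), so the sign of
    # <N, N> is the sign of u_x**2 + u_t**2 - 1
    ux = u_xi(x, t)                     # u_x = du/dxi
    ut = kappa * ux                     # u_t = kappa * du/dxi
    return ux**2 + ut**2 - 1.0

# causal character of the surface at the node points x = 0, 0.1, ..., 1, t = 1
for i in range(11):
    q = NN(i / 10, 1.0)
    kind = "timelike" if q > 0 else "spacelike" if q < 0 else "degenerate"
    print(f"x = {i/10:.1f}: surface is {kind}")

# bisect for the degenerate point (<N, N> = 0) between x = 0.9 and x = 1
a, b = 0.9, 1.0
for _ in range(60):
    m = 0.5 * (a + b)
    if NN(a, 1.0) * NN(m, 1.0) <= 0:
        b = m
    else:
        a = m
print(f"degenerate point near x = {0.5 * (a + b):.4f}")
```

The sign change occurs between the last two nodes, locating the degenerate point near \(x=0.94\).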

Remark 4.2

From Table 2, we see that the surface \(r(x,t)\) contains at least one degenerate point near \(x=0.94\). As the number of node points increases, the nodes approach the degenerate points. Therefore, the numerical solutions become unstable when the number of node points increases.

5 Gaussian curvature of node points

Another important quantity for a surface is the Gaussian curvature, which is an intrinsic invariant of it. The Gaussian curvature is the determinant of the shape operator. For a surface \(r ( x,t ) \), we shall use the following standard way to compute the Gaussian curvature.

Consider \(\langle N,N\rangle=\varepsilon \Vert N \Vert ^{2}\), where \(\varepsilon =\pm1\). Let us define

$$ E= \langle r_{x},r_{x} \rangle,\qquad F= \langle r_{x},r_{t} \rangle,\qquad G= \langle r_{t},r_{t} \rangle $$
(39)

and

$$ e= \langle r_{xx}, N \rangle,\qquad f= \langle r_{xt},N \rangle, \qquad g= \langle r_{tt},N \rangle. $$

Then the Gaussian curvature \(K ( p ) \) at a point p of a surface satisfies

$$ K ( p ) =\varepsilon\frac{eg-f^{2}}{EG-F^{2}}. $$
(40)

We note that

  1. (i)

    \(K ( p ) >0\) means that the surface \(r ( x,t ) \) is shaped like an elliptic paraboloid near p. In this case, p is called an elliptic point.

  2. (ii)

    \(K ( p ) <0\) means that the surface \(r ( x,t ) \) is shaped like a hyperbolic paraboloid near p. In this case, p is called a hyperbolic point.

  3. (iii)

    \(K ( p ) =0\) means that the surface \(r ( x,t ) \) is shaped like a parabolic cylinder or a plane near p. In this case, p is called a parabolic point.

Now, let us consider the surface given in (37). From (39) and (40), by a straightforward computation, we get

$$\begin{aligned} K =&-\bigl(e^{\frac{4\lambda t}{3}+\lambda^{3}t+4\lambda x}\lambda ^{8} \bigl( 2-3 \lambda^{2} \bigr) ^{2}\bigl(-8+3\lambda^{2}\bigr) \bigl( 4+3\lambda ^{2} \bigr) \mu^{2}\bigl(e^{\lambda^{3}t} \lambda^{2} \\ &{}-e^{\frac{2}{3}\lambda ( t+3x ) }\mu^{2}\bigr)^{2}\bigr)/\bigl(2 \bigl(e^{\lambda ( t+\lambda ^{2}t+3x ) }\lambda^{2} \bigl( -108+40\lambda^{4}-12 \lambda^{6}+9\lambda^{8} \bigr) \mu^{2} \\ &{}-18e^{\frac{\lambda t}{3}+\lambda x}\mu^{4}+18e^{\frac{\lambda^{3}t}{2}}\lambda \bigl(e^{\frac{3\lambda^{3}t}{2}}\lambda ^{3}-4e^{\lambda ( ( \frac{1}{3}+\lambda^{2} ) t+x ) } \lambda^{2}\mu-4e^{\lambda ( t+3x ) }\mu^{3}\bigr)\bigr) \\ &{}\bigl(e^{\lambda ( t+\lambda^{2}t+3x ) }\lambda^{2}\bigl(108+40\lambda ^{4}-12\lambda^{6}+9\lambda^{8}\bigr) \mu^{2} \\ &{}+18e^{\frac{\lambda t }{3}+\lambda x}\bigl(e^{2\lambda^{3}t}\lambda^{4}-4e^{\frac{1}{6}\lambda ( 2+9\lambda^{2} ) t+\lambda x} \lambda^{3}\mu-4e^{\lambda t+\frac {\lambda^{3}t}{3}+3\lambda x}\lambda\mu^{3}+e^{\frac{4}{3}\lambda ( t+3x ) } \mu^{4}\bigr)\bigr)\bigr). \end{aligned}$$

Another important curvature is the mean curvature, which measures the surface tension exerted by the surrounding space at a point. The mean curvature is half the trace of the shape operator. For a surface \(r(x,t) \), we shall use the following standard way to compute the mean curvature \(H(p)\):

$$ H(p)=\varepsilon\frac{1}{2}\frac{eG-2fF+gE}{EG-F^{2}}. $$
(41)

If \(H(p)=0\) at all points of \(r(x,t)\), then the surface is called minimal. Furthermore, if the mean curvature at a point p attains the least possible absolute value, then p receives the least possible amount of tension from the surrounding space, and p is called an ideal point. That is, if a point of a surface is affected as little as possible by the external influence, then it is called ideal.
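The computation of (39)–(41) can be organized symbolically. The following sympy sketch is our own illustration: the helper names are ours, the wedge product follows the convention \(\langle X\wedge Y,Z\rangle=\det(X,Y,Z)\), and e, f, g are the second fundamental form coefficients \(\langle r_{xx},N\rangle\), \(\langle r_{xt},N\rangle\), \(\langle r_{tt},N\rangle\). As a sanity check we evaluate the simple surface \(u=x^{2}+t^{2}\) (not the KdV solution) at the origin, where the normal is timelike (\(\varepsilon=-1\)):

```python
import sympy as sp

x, t = sp.symbols("x t")

def ip(X, Y):
    # Lorentzian inner product (35), signature (+, +, -)
    return X[0]*Y[0] + X[1]*Y[1] - X[2]*Y[2]

def wedge(X, Y):
    # wedge product of R^3_1, defined so that <X ^ Y, Z> = det(X, Y, Z)
    return sp.Matrix([X[1]*Y[2] - X[2]*Y[1],
                      X[2]*Y[0] - X[0]*Y[2],
                      -(X[0]*Y[1] - X[1]*Y[0])])

def curvatures(u, eps):
    # eps = sign of <N, N> on the patch considered (+1 or -1)
    r = sp.Matrix([x, t, u])
    rx, rt = r.diff(x), r.diff(t)
    n = wedge(rx, rt)
    N = n / sp.sqrt(eps * ip(n, n))                       # unit normal (36)
    E, F, G = ip(rx, rx), ip(rx, rt), ip(rt, rt)          # (39)
    e = ip(rx.diff(x), N)
    f = ip(rx.diff(t), N)
    g = ip(rt.diff(t), N)
    K = sp.simplify(eps * (e*g - f**2) / (E*G - F**2))              # (40)
    H = sp.simplify(eps * (e*G - 2*f*F + g*E) / (2 * (E*G - F**2))) # (41)
    return K, H

# sanity check: u = x**2 + t**2 has a timelike normal near the origin (eps = -1)
K, H = curvatures(x**2 + t**2, -1)
print(K.subs({x: 0, t: 0}), H.subs({x: 0, t: 0}))   # -4 and -2
```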

From (41), we obtain

$$\begin{aligned} H =&\bigl(3e^{\frac{\lambda t}{3}-\frac{\lambda^{3}t}{2}+\lambda x}\lambda ^{4} \bigl( 40-12 \lambda^{2}+9\lambda^{4} \bigr) \\ &{}\mu\bigl(\lambda+e^{\lambda ( \frac{1}{6} ( 2-3\lambda ^{2} ) t+x ) }\mu\bigr)\bigr)/\bigl(2\bigl( \lambda-e^{\lambda ( \frac{1}{6} ( 2-3\lambda^{2} ) t+x ) }\mu\bigr)^{3} \\ &{}\bigl(1/\bigl(\lambda-e^{\lambda ( \frac{1}{6} ( 2-3\lambda^{2} ) t+x ) }\mu\bigr)^{4}e^{\frac{2\lambda t}{3}-\lambda^{3}t+2\lambda x} \\ &{}\bigl(-18e^{\lambda^{3}t-\frac{2\lambda ( t+3x ) }{3}}\lambda ^{4}+72e^{\frac{\lambda t}{3}-\frac{\lambda^{3}t}{2}+\lambda x} \lambda \mu ^{3} \\ &{}-18e^{\frac{2\lambda t}{3}-\lambda^{3}t+2\lambda x}\mu^{4}+\lambda ^{2}\mu \bigl(72e^{\frac{1}{6}\lambda ( ( -2+3\lambda^{2} ) t-6x ) }\bigr)\lambda \\ &{}+\bigl(-108+40\lambda^{4}-12\lambda^{6}+9 \lambda^{8}\bigr)\mu\bigr)\bigr)\bigr)^{3/2}). \end{aligned}$$

As a consequence of the above facts, we get the following corollary:

Corollary 5.1

For the node points of \(r(x,t)\), we have Table 3.

Table 3 Curvatures of \(r(x,t)\) surface at node points

Remark 5.2

From Table 3, we see that, as x approaches 0.94, the values of the Gauss curvature and the mean curvature change remarkably. Therefore, the external influence is maximal near the point \(x=0.94\).

6 Conclusions

Using the \((1/G')\)-expansion method, the exact solution \(r(x,t)\) of the fractional combined KdV–mKdV equation was obtained. The numerical solutions of the fractional combined KdV–mKdV equation were obtained by using the collocation method and were compared with the exact solution. The computational efficiency and effectiveness of the proposed method were tested on a problem, and the error norms \(L_{2}\) and \(L_{\infty}\) were calculated. The obtained results show that the error norms are small during all computer runs for all bases except for the MQ basis. This indicates that the present method is a particularly successful numerical scheme for solving the fractional combined KdV–mKdV equation. However, the numerical solutions are more accurate for \(h=0.1\) than for \(h=0.01\). Therefore, the causal character of the exact solution was determined at the nodal points. From Tables 1 and 2, it was observed that the most accurate numerical solution occurred in the timelike case of \(r(x,t)\), and that there exists at least one degenerate point near \(x=0.94\). Furthermore, from Table 3, it was observed that the most accurate numerical solution occurred at the elliptic points of \(r(x,t)\), and that the ideal node point of \(r(x,t)\) is \(x=0\).