1 Introduction

The importance of fractional differential equations (FDEs) is stimulated by the appearance of many scientific models that possess a nonlocal dynamical property. It has been observed that such models exhibit a continuum flow, due to the fractional derivative effect, that can be characterized by a long-term memory in time (the memory index). This memory index captures how the short-time chaotic or bifurcation behavior of a phenomenon depends on its past circumstances. For example, it has been observed that universal electromagnetic, acoustic, and mechanical responses are influenced by a remnant memory which can be accurately modeled by the fractional diffusion-wave equations [1]. Also, it has been shown that the propagation of mechanical diffusive waves in viscoelastic media can be described by the fractional wave equation [2]. Further applications in Newtonian mechanics, electromagnetism, quantum mechanics, electrochemistry, signal and image processing, and biomedical engineering can be found in [3–10].

In general, it is not an easy task to extract an analytical solution for nonlinear fractional differential equations. Almost all attempts either find numerical solutions over a specific range or take a few terms of an iterative computational series solution as an approximation. Among the available methods are He's variational iteration method [11, 12], the iterative Laplace transform method [13], Adomian's decomposition method [14], the differential transform method [15, 16], homotopy analysis/perturbation methods [17, 18], the homotopy analysis–Laplace transform method [19], Chebyshev/Jacobi/Legendre operational matrix methods [20], the fractional Lie group method [21], the generalized Taylor power series method [22, 23], and the residual power series method [24–28].

The motivation of the current work is to explore a closed-form solution of a general homogeneous time-invariant fractional initial value problem in the normal form,

$$\begin{aligned}& D_{t}^{\alpha } \bigl[u(\overline{x},t) \bigr] = F \bigl(u( \overline{x},t) \bigr),\quad 0\leq t < R, \\& u(\overline{x},0) = f(\overline{x}), \end{aligned}$$

where \(D_{t}^{\alpha }\) is the Caputo fractional operator with \(\alpha \in (0,1]\), \(u(\overline{x},t)\) is an unknown function, F is an analytic differential operator in the variables \(\overline{x}= \langle x_{1},\ldots ,x_{m}\rangle \) that involves both linear and nonlinear terms, \(R\in \mathbb{R}\), and \(f(\overline{x})\in \mathcal{C}^{\infty } ( \mathbb{R}^{m} ) \). Classes of these equations include, but are not limited to, the Schrödinger equation, Korteweg–de Vries equation, Burgers–Fisher equation, Cauchy reaction–diffusion equation, Boussinesq equation, and Sharma–Tasso–Olver equation. In particular, we have successfully applied the present approach to finding closed-form solutions for various nonlinear time-fractional dispersive equations, namely \(K_{\alpha }(2,2)\), \(ZK_{\alpha }(2,2)\), \(DD_{\alpha }(1,2,2)\), and \(K_{\alpha }(2,2,1)\) equations. It should be pointed out here that the existence of mild solutions of the nonlocal problem of time-fractional evolution equations is extensively studied in [29–38].

The organization of the current paper is as follows: In Sect. 2, we recall some necessary definitions and theorems regarding the fractional derivative and fractional power series. In Sect. 3, the solution to a general time-invariant fractional IVP is constructed with some related convergence results and error analysis. In Sect. 4, our approach is applied to various nonlinear time-fractional dispersive equations with graphical and numerical analysis to illustrate the adequacy of the proposed approach. Finally, some concluding remarks are given in Sect. 5.

2 Preliminaries

Many definitions and studies of fractional derivatives have been proposed in the literature, probably because no single definition preserves all properties of the classical integer-order derivative. These definitions include the Grünwald–Letnikov, Riemann–Liouville, Weyl, Riesz, and Caputo versions. In the Caputo case, however, the derivative of a constant function is zero, and the initial conditions for fractional differential equations can be stated and handled in a manner analogous to the classical integer-order case. For these reasons, we adopt the Caputo fractional derivative in this work.

Definition 2.1

A real function \(u(t)\), \(t>0\), is said to be in the space \(\mathcal{C}_{\lambda }^{1}\), \(\lambda \in \mathbb{R}\), if there exists a real number \(a>\lambda \) such that \(u(t)=t^{a}v(t)\), where \(v(t)\in \mathcal{C}[0,\infty )\); it is in the space \(\mathcal{C}_{\lambda }^{n}\) if \(u^{(n)}\in \mathcal{C}_{\lambda }^{1}\), \(n\in \mathbb{N}\).

Definition 2.2

The Riemann–Liouville fractional integral operator of order \(\alpha > 0\) associated with a real function \(u(t)\in \mathcal{C}_{\lambda }^{1}\), \(\lambda \geq -1\), is defined as \(\mathcal{J}^{\alpha }_{t} [ u(t) ] =\frac{1}{ \Gamma (\alpha )}\int_{0}^{t} ( t-\tau ) ^{\alpha -1}u( \tau )\,d\tau \), and \(\mathcal{J}^{0}_{t}\) is the identity operator.

Definition 2.3

The Caputo time-fractional derivative of order \(\alpha >0\) of \(u(t)\in \mathcal{C}_{-1}^{n}\), \(n\in \mathbb{N}\) is defined as \(D^{\alpha }_{t} [ u(t) ] =\mathcal{J}^{n-\alpha }_{t} [ u ^{(n)}(t) ] \) if \(n-1<\alpha <n\) and \(D^{\alpha }_{t} [ u(t) ] =u ^{(n)}(t)\) if \(\alpha =n\). Similarly, for n being the smallest integer that exceeds α, the Caputo time-fractional derivative operator of order α is given as \(D^{\alpha }_{t} [ u(\overline{x},t) ] = \mathcal{J}^{n-\alpha }_{t} [ \frac{\partial^{n}u(\overline{x},t)}{ \partial t^{n}} ] \) if \(n-1<\alpha <n\) and \(D^{\alpha }_{t} [ u( \overline{x},t) ] =\frac{\partial^{n}u(\overline{x},t)}{\partial t^{n}}\) if \(\alpha =n\).

Remark 1

A direct implementation of the Caputo derivative yields \(D^{\alpha } _{t} [ t^{p} ] =\frac{\Gamma (p+1)}{\Gamma (p-\alpha +1)} t ^{p-\alpha }\) for \(p>0\) and \(D^{\alpha }_{t} [ c ] =0\), where c is a constant. Also, it is easy to see that the Caputo derivative is a left inverse of the Riemann–Liouville integral but not a right inverse. Specifically, for \(n-1<\alpha \leq n\), \(n \in \mathbb{N}\), and \(u(t)\in \mathcal{C}_{\lambda }^{n}\), \(\lambda \geq -1\), we have \(D^{\alpha }_{t} \mathcal{J}^{\alpha }_{t} [ u(t) ] =u(t)\) and \(\mathcal{J}^{ \alpha }_{t}D^{\alpha }_{t} [ u(t) ] =u(t)-\sum_{k=0}^{n-1}u ^{(k)}(0^{+})\frac{t^{k}}{k!}\), where \(t>0\).
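To make Remark 1 concrete, the power rule and the left-inverse property can be checked numerically on monomials. The following is a minimal Python sketch (illustrative only; the paper's own computations were done in Mathematica 10), where a pair `(c, p)` stands for the monomial \(c t^{p}\):

```python
from math import gamma

def rl_integral_monomial(c, p, alpha):
    """Riemann-Liouville integral of c*t**p:
    J^alpha[t^p] = Gamma(p+1)/Gamma(p+alpha+1) * t^(p+alpha)."""
    return c * gamma(p + 1) / gamma(p + alpha + 1), p + alpha

def caputo_monomial(c, p, alpha):
    """Caputo derivative of c*t**p for p > 0:
    D^alpha[t^p] = Gamma(p+1)/Gamma(p-alpha+1) * t^(p-alpha)."""
    return c * gamma(p + 1) / gamma(p - alpha + 1), p - alpha

alpha = 0.5
# left-inverse property on a monomial: D^alpha J^alpha [t^2] = t^2
c, p = caputo_monomial(*rl_integral_monomial(1.0, 2.0, alpha), alpha)
print(c, p)  # coefficient ~ 1.0, exponent 2.0
```

The Gamma factors cancel exactly, which is the monomial case of \(D^{\alpha }_{t}\mathcal{J}^{\alpha }_{t}[u]=u\).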

It should be noted here that it suffices to consider the Caputo fractional derivative of order \(0<\alpha \leq 1\) since \(D^{\alpha } _{t} [ u(t) ] =D^{\alpha -(n-1)}_{t} [ u^{(n-1)}(t) ] \) for arbitrary order \(n-1<\alpha \leq n\), where \(\alpha -(n-1)\in (0,1]\).

Definition 2.4

([25])

A fractional power series (FPS) expansion is an infinite series about \(t=t_{0}\) of the form \(\sum_{k=0}^{\infty }c_{k}(t-t_{0})^{k \alpha }\) where \(0\leq n-1 <\alpha \leq n\), \(t \geq t_{0}\).

Theorem 2.5

([25]) Suppose that \(u(t)\) has a FPS expansion about \(t_{0}\) as above for \(t_{0}\leq t\leq t_{0}+R\). If \(D_{t}^{k \alpha } [ u(t) ] \) are continuous on \((t_{0}, t_{0}+R)\) for \(k\in \mathbb{N}^{*}\), then \(c_{k}=\frac{D_{t}^{k\alpha } [ u(t _{0}) ] }{\Gamma (k\alpha +1)}\), where \(D_{t}^{k\alpha }\) is the k-fold Caputo derivative and R is the radius of convergence.

Definition 2.6

([24])

A power series of the form

$$ \sum_{k=0}^{\infty }f_{k}( \overline{x}) t^{k\alpha }, $$
(2.1)

where \(\overline{x}\in I=I_{1}\times \cdots \times I_{m}\subset \mathbb{R}^{m}\), \(0<\alpha \leq 1\), and \(t\geq 0\) is called a multi-fractional power series about \(t=0\).

Theorem 2.7

([24])

Suppose that \(u(\overline{x},t)\) has a multi-fractional power series representation about \(t=0\) as above for \(\overline{x} \in I\) and \(0\leq t\leq R\). If \(D_{t}^{k\alpha } [ u(\overline{x},t) ] \) are continuous on \(I\times (0,R)\) for each \(k\in \mathbb{N}^{\ast }\), then \(f_{k}( \overline{x})=\frac{D_{t}^{k\alpha } [ u(\overline{x},0) ] }{ \Gamma (k\alpha +1)}\) where R is the radius of convergence.

3 Analytic solution of homogeneous time-invariant fractional IVP

As our approach depends mainly on constructing an analytical solution of the time-fractional differential equation under consideration, we first present, in a similar fashion to the classical power series, some essential convergence theorems pertaining to our proposed solution.

Theorem 3.1

Let \(\{f_{k}(\overline{x})\}_{k=0}^{\infty }\) be a sequence of functions \(f_{k}:I\rightarrow \mathbb{R}\). If (2.1) is convergent for some \(t=t_{0}>0\), then it is convergent for all \(t\in (0,t_{0})\).

Proof

Assume that (2.1) is convergent for \(t=t_{0}>0\). Then, for fixed \(\epsilon_{0}>0\), there exists \(N\in \mathbb{N}\) such that \(\vert f_{k}(\overline{x})t_{0}^{\alpha k} \vert <\epsilon_{0}\) for all \(k\geq N\). It follows that if \(k\geq N\), we have \(\vert f_{k}(\overline{x})t^{\alpha k} \vert <\epsilon_{0} ( \frac{t}{t_{0}} ) ^{\alpha k}\) for all \(\overline{x}\in I\) and \(t\in (0,t_{0}) \), which shows that \(\sum_{k=0}^{\infty }f_{k}(\overline{x}) t^{k\alpha }\) is absolutely convergent (and so convergent). □

We remark here that if \(f_{0}(\overline{x})\) is a bounded function on I, then the convergence of (2.1) at some \(t=t_{0}>0\) implies the convergence on \([0,t_{0})\).

Corollary 3.2

Let \(\{f_{k}(\overline{x})\}_{k=0}^{\infty }\) be a sequence of functions \(f_{k}:I\rightarrow \mathbb{R}\). If (2.1) is divergent for some \(t=t_{0}>0\), then it is divergent for all \(t>t_{0}\).

Proof

Suppose not. That is, (2.1) is convergent for some \(t>t_{0}\). Then from Theorem 3.1, it converges on \((0,t)\) and thus converges at \(t_{0}\), which is a contradiction. □

Corollary 3.3

Let \(\{f_{k}(\overline{x})\}_{k=0}^{\infty }\) be a sequence of functions \(f_{k}:I\rightarrow \mathbb{R}\). Then one of the following cases is true as regards (2.1):

\(p_{1}\): The series converges only at \(t=0\);

\(p_{2}\): the series converges for all \(t\geq 0\);

\(p_{3}\): there exists \(R>0\) (called the radius of convergence) such that (2.1) converges for all \(t\in (0,R)\) and diverges for all \(t>R\).

Proof

Suppose that neither \(p_{1}\) nor \(p_{2}\) holds. Then there exist \(a, b\in \mathbb{R}^{+}\) such that (2.1) converges at \(t=a\) and diverges at \(t=b\). Therefore, the set \(T=\{t>0:\sum_{k=0}^{\infty }f_{k}(\overline{x}) t^{k\alpha } \text{ converges}\}\) is nonempty, and \(T\subseteq (0,b]\) by Corollary 3.2. Thus \(R:=\sup T\) exists. Now, if \(t>R\), then \(t\notin T\) and so (2.1) is divergent. If \(0< t< R\), then, by the definition of the supremum, there exists \(t_{0}\in T\) with \(t< t_{0}\), so (2.1) converges at \(t_{0}\) and hence, by Theorem 3.1, on \((0,t_{0})\); in particular, it converges at t. The other cases can be handled easily. □
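A numerical illustration of case \(p_{3}\) (a sketch with the hypothetical choice \(f_{k}(\overline{x})=2^{k}\), constant in \(\overline{x}\)): the series becomes the geometric series \(\sum_{k}(2t^{\alpha })^{k}\), whose radius of convergence is \(R=2^{-1/\alpha }\):

```python
alpha = 0.5
R = 2 ** (-1.0 / alpha)   # = 0.25 here: 2 * t**alpha < 1 exactly when t < R

def partial_sum(t, n):
    # partial sums of sum_k (2 t^alpha)^k = sum_k f_k t^{k alpha} with f_k = 2^k
    return sum((2 * t ** alpha) ** k for k in range(n + 1))

# inside the radius the partial sums settle down; outside they blow up
print(partial_sum(0.9 * R, 1000) - partial_sum(0.9 * R, 500))   # essentially 0
print(partial_sum(1.1 * R, 1000) - partial_sum(1.1 * R, 500))   # astronomically large
```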

Now, consider the following general homogeneous time-invariant fractional initial value problem:

$$\begin{aligned}& \begin{aligned} &D_{t}^{\alpha } \bigl[u( \overline{x},t) \bigr]=F \bigl(u(\overline{x},t) \bigr),\quad 0\leq t < R, \\ &u(\overline{x},0)=f(\overline{x}), \end{aligned} \end{aligned}$$
(3.1)

where \(D_{t}^{\alpha }\), α, F, R, and \(f(\overline{x})\) are as described in Sect. 1. In our next theorem, we exhibit a scheme parallel to the Taylor series method for solving problem (3.1). The method gives an analytical solution in the form of a convergent multi-fractional power series without the need for linearization, perturbation, or discretization of the variables. Instead of equating terms with the same degree of homogeneity, our approach depends recursively on time-fractional differentiation to obtain the unknown series coefficients.

Notation 3.4

We denote the coefficient extraction operator for a multi-fractional power series \(G (\overline{x},t)\), which extracts a constant multiple of the coefficient of \(t^{n\alpha }\) in G, by \([t^{n\alpha } ]_{G}\). More precisely, for \(n\geq 1\)

$$ \bigl[t^{n\alpha } \bigr]_{G}= \bigl[t^{n\alpha } \bigr]\sum _{k=0}^{\infty }g_{k}(\overline{x}) t^{k\alpha }=\Gamma (n\alpha +1 ) g _{n}(\overline{x}). $$
(3.2)

Note that, for a multi-fractional power series representation, \(G (\overline{x},t)=\sum_{k=0}^{\infty }g_{k}(\overline{x}) t^{k \alpha }\), we have

$$ D_{t}^{n\alpha } \bigl[ G(\overline{x},t) \bigr] | _{t=0}=\Gamma (n\alpha +1 )g_{n}(\overline{x})= \bigl[t^{n\alpha } \bigr]_{G}, $$
(3.3)

where \(D_{t}^{n\alpha }=D_{t}^{\alpha }\cdots D_{t}^{\alpha }\) (n times).
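Representing a series \(G=\sum_{k}g_{k}t^{k\alpha }\) at a fixed \(\overline{x}\) by its list of coefficients, the operator (3.2) and the identity (3.3) can be exercised in a few lines of Python (an illustrative sketch, not the paper's code):

```python
from math import gamma

def caputo_step(g, alpha):
    """Apply D_t^alpha once to sum_k g[k] * t^{k*alpha}: the constant term is
    annihilated, and t^{k a} maps to (Gamma(k a+1)/Gamma((k-1) a+1)) t^{(k-1) a}."""
    return [gamma(k * alpha + 1) / gamma((k - 1) * alpha + 1) * g[k]
            for k in range(1, len(g))]

def extract(g, n, alpha):
    """[t^{n alpha}]_G = D_t^{n alpha}[G] at t=0, i.e. Gamma(n alpha+1) g_n (Eq. (3.3))."""
    for _ in range(n):
        g = caputo_step(g, alpha)
    return g[0]  # evaluating at t = 0 keeps only the constant term

alpha, g = 0.5, [5.0, 4.0, 3.0, 2.0]
print(extract(g, 2, alpha), gamma(2 * alpha + 1) * g[2])  # the two values agree
```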

Theorem 3.5

Suppose that the solution \(u(\overline{x},t)\) of (3.1) has a convergent multi-fractional power series representation of the form (2.1) for \(\overline{x}\in I\subset \mathbb{R}^{m}\), and \(0\leq t\leq R\). If \(D_{t}^{n\alpha } [ u(\overline{x},t) ] \) are continuous on \(I\times (0,R)\) for \(n\in \mathbb{N}\), then the solution of (3.1) is given analytically by

$$ u(\overline{x},t)=f(\overline{x})+\sum _{n=1}^{\infty }\frac{ [t ^{(n-1)\alpha } ]_{F}}{\Gamma (n\alpha +1 )} t^{n\alpha }. $$
(3.4)

Proof

As \(u(\overline{x},t)\) satisfies the initial condition, we must have \(f_{0}(\overline{x})=f(\overline{x})\). Applying the operator \(D_{t}^{\alpha }\) to Eq. (3.1) \((n-1)\) times and using the linearity of the Caputo operator together with Remark 1, we obtain, for \(n\geq 1\),

$$\begin{aligned} D_{t}^{(n-1)\alpha } \bigl[F \bigl(u(\overline{x},t) \bigr) \bigr] &= D_{t}^{(n-1)\alpha } \bigl[D_{t}^{\alpha } \bigl[u( \overline{x},t) \bigr] \bigr] \\ &= D_{t}^{n\alpha } \Biggl[\sum_{k=0}^{\infty }f_{k}( \overline{x}) t^{k \alpha } \Biggr] \\ &=\sum_{k=n}^{\infty }\frac{\Gamma (k\alpha +1) f_{k}(\overline{x})}{ \Gamma ((k-n)\alpha +1 )} t^{(k-n)\alpha }, \end{aligned}$$
(3.5)

for all \(\overline{x}\in I\) and \(0\leq t\leq R\). In particular at \(t=0\), we obtain

$$ f_{n}(\overline{x}) \Gamma (n\alpha +1)= D_{t}^{(n-1)\alpha } \bigl[F \bigl(u(\overline{x},t) \bigr) \bigr] | _{t=0} $$
(3.6)

for \(\overline{x}\in I\), and hence

$$\begin{aligned} f_{n}(\overline{x}) &= \frac{D_{t}^{(n-1)\alpha } [F (u(\overline{x},t) ) ] | _{t=0}}{\Gamma (n\alpha +1)} \\ &= \frac{ [t^{(n-1)\alpha } ]_{F}}{\Gamma (n\alpha +1)} \end{aligned}$$
(3.7)

as required. □

As an immediate consequence of Theorems 2.7 and 3.5, we obtain the following generalized Taylor formula.

Corollary 3.6

Let \(u(\overline{x},t)\) be the solution of (3.1) under the same hypotheses as Theorem 3.5. Then \(u(\overline{x},t)\) is analytic at \(t=0\) in the sense of fractional power series; that is,

$$ u(\overline{x},t)=\sum_{n=0}^{\infty } \frac{D_{t}^{n\alpha } [u( \overline{x},0) ]}{\Gamma (n\alpha +1)} t^{n\alpha }. $$
(3.8)

As an immediate special case of Theorem 3.5, we have the following explicit description of the solution coefficients in terms of the previous coefficient.

Corollary 3.7

If F is a linear differential operator with constant coefficients, i.e. \(F(u)=\sum_{j=1}^{m}\sum_{i=0}^{k}a_{ij}\frac{\partial^{i}u}{ \partial x_{j}^{i}}\), where the \(a_{ij}\) are constants, then for \(n\geq 1\)

$$ f_{n}(\overline{x})=\frac{\Gamma ((n-1)\alpha +1 )}{\Gamma (n\alpha +1)}\sum _{j=1}^{m}\sum_{i=0}^{k}a_{ij} \frac{\partial^{i}}{\partial x_{j}^{i}} f_{n-1}( \overline{x}). $$
(3.9)

Remark 2

In practical terms, to find \(f_{n}(\overline{x})\) for \(n\geq 1\), it is sufficient to substitute the \((n-1)\)-truncated series \(\sum_{k=0}^{n-1}f _{k}(\overline{x})t^{k\alpha }\) of \(u(\overline{x},t)\) into Eq. (3.7), since the remaining terms contain higher powers \(\mathcal{O}(t^{n\alpha })\) and \(D_{t}^{(n-1)\alpha } [\mathcal{O}(t ^{n\alpha }) ]= \mathcal{O}(t^{\alpha })\), which vanishes at \(t=0\).
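As an illustration of the recursion, consider the hypothetical linear problem \(D_{t}^{\alpha }u=u_{x}\) with \(u(x,0)=e^{x}\). Since differentiating \(e^{x}\) changes nothing, \(f_{n}(x)=c_{n}e^{x}\) and Eq. (3.6) reduces to \(\Gamma (n\alpha +1)c_{n}=\Gamma ((n-1)\alpha +1)c_{n-1}\), which telescopes to \(c_{n}=1/\Gamma (n\alpha +1)\), i.e. \(u=e^{x}E_{\alpha }(t^{\alpha })\). A short Python sketch confirming this:

```python
from math import gamma

alpha = 0.75
# D_t^alpha u = u_x with u(x,0) = e^x: writing f_n(x) = c_n e^x, Eq. (3.6) gives
# Gamma(n*alpha + 1) c_n = Gamma((n-1)*alpha + 1) c_{n-1}, with c_0 = 1
c = [1.0]
for n in range(1, 8):
    c.append(gamma((n - 1) * alpha + 1) / gamma(n * alpha + 1) * c[-1])

# the product telescopes to c_n = 1/Gamma(n*alpha + 1), i.e. u = e^x E_alpha(t^alpha)
print(all(abs(c[n] - 1 / gamma(n * alpha + 1)) < 1e-12 for n in range(8)))  # True
```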

Due to the complexity of this kind of equation, it is not always possible to find a general expression for the series coefficients. In such a case, the solution can be approximated by a partial sum of the series, \(u_{n}(\overline{x},t)=\sum_{k=0}^{n}f _{k}(\overline{x}) t^{\alpha k}\), on some reasonable interval of t, and the overall error can be made smaller by adding more terms, as the following result shows.

Corollary 3.8

Let \(\{f_{k}(\overline{x})\}_{k=0}^{\infty }\) be a uniformly bounded sequence of functions \(f_{k}:I\rightarrow \mathbb{R}\). Then (2.1) is uniformly convergent for \(0\leq t \leq R<1\). Moreover, \(\Vert u(\overline{x},t)-u_{n}(\overline{x},t) \Vert \rightarrow 0\) as \(n\rightarrow \infty \).

Proof

By the uniform boundedness of \(\{f_{k}(\overline{x})\}_{k=0}^{\infty }\), there exists \(M>0\) such that \(\vert f_{k}(\overline{x}) \vert \leq M\) for all \(k\in \mathbb{N}\) and \(\overline{x}\in I\). Thus \(\vert f_{k}(\overline{x})t^{\alpha k} \vert \leq MR^{\alpha k}\). Since \(\sum_{k=0} ^{\infty }MR^{\alpha k}\) is a convergent geometric series (with ratio \(R^{\alpha }<1\)), by the Weierstrass M-test we see that \(\sum_{k=0}^{ \infty }f_{k}(\overline{x}) t^{\alpha k}\) is uniformly convergent, as desired. Moreover,

$$ \bigl\Vert u(\overline{x},t)-u_{n}(\overline{x},t) \bigr\Vert \leq \sum_{k=n+1}^{\infty}MR^{\alpha k}= \frac{MR^{(n+1)\alpha }}{1-R^{\alpha }}\xrightarrow{n \rightarrow \infty }0. $$
(3.10)

 □
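The tail bound (3.10) can be tested numerically. A sketch (illustrative Python, with the hypothetical bounded family \(f_{k}(x)=\cos (kx)\), so \(M=1\)):

```python
from math import cos

alpha, R, M, n = 0.5, 0.64, 1.0, 10   # f_k(x) = cos(k x), so |f_k(x)| <= M = 1

def u_partial(x, t, terms):
    # partial sum of sum_k f_k(x) t^{k alpha} over k = 0, ..., terms - 1
    return sum(cos(k * x) * t ** (alpha * k) for k in range(terms))

x, t = 1.3, 0.64                       # evaluate at t = R, the worst case
u_ref = u_partial(x, t, 2000)          # long partial sum as a proxy for u(x, t)
bound = M * R ** ((n + 1) * alpha) / (1 - R ** alpha)
print(abs(u_ref - u_partial(x, t, n + 1)) <= bound)  # True
```

The observed truncation error is in fact much smaller than the bound, since the cosine coefficients oscillate in sign.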

4 Applications and discussions

The goal of this section is to verify the applicability of the proposed approach, derived from Theorem 3.5, in studying the memory effects of the time-fractional derivative on various nonlinear dispersive equations. All necessary calculations and graphical analysis were carried out using Mathematica 10.

Example 1

In [39], Rosenau and Hyman introduced a class of solitary waves with compact support (called compactons) to understand the role of nonlinear dispersion in pattern formation in liquid drops. These compactons are solitary waves free of exponential tails, and they re-emerge with exactly the same coherent shape after a collision. Motivated by the importance of taking into account the memory effects due to the time-fractional derivative, we first consider the time-fractional version \(K_{\alpha }(2,2)\) of the third-order Rosenau–Hyman equation, which reads

$$ D^{\alpha }_{t} u+a \bigl(u^{2} \bigr)_{x}+ \bigl(u^{2} \bigr)_{xxx}=0, $$
(4.1)

subject to the initial condition with compact support

$$ u(x,0)= \textstyle\begin{cases} \frac{4c}{3a}\cos^{2} ( \frac{\sqrt{a}x}{4} ) , & \vert x \vert \leq \frac{ 2\pi }{\sqrt{a}} , \\ 0, & \mbox{otherwise}, \end{cases} $$
(4.2)

where \(t\geq 0\), \(a\in \mathbb{R}^{+}\), and \(0<\alpha \leq 1\). In this case we have \(F(u)=- (a(u^{2})_{x}+(u^{2})_{xxx} )\). In accordance with Theorem 3.5, the proposed fractional power series solution of Eqs. (4.1) and (4.2) has the form

$$ u(x,t)= \frac{4c}{3a}\cos^{2} \biggl( \frac{\sqrt{a}x}{4} \biggr) +f_{1}(x) t^{\alpha }+f_{2}(x) t^{2\alpha }+f_{3}(x) t^{3\alpha }+\cdots . $$
(4.3)

Following Eq. (3.7) and taking into account Remark 2, we successively obtain the coefficients \(f_{n}(x)\) as follows:

$$\begin{aligned} &f_{1}(x)=+\frac{c^{2}}{3\sqrt{a}\,\Gamma (\alpha +1)}\sin \biggl( \frac{\sqrt{a}x}{2} \biggr) ,\qquad f_{2}(x)=-\frac{c^{3}}{6 \Gamma (2\alpha +1)}\cos \biggl( \frac{\sqrt{a}x}{2} \biggr) , \\ &f_{3}(x)=-\frac{c^{4}\sqrt{a}}{12 \Gamma (3\alpha +1)}\sin \biggl( \frac{\sqrt{a}x}{2} \biggr) ,\qquad f_{4}(x)=+\frac{c^{5}a}{24 \Gamma (4\alpha +1)}\cos \biggl( \frac{\sqrt{a}x}{2} \biggr) , \\ &f_{5}(x)=+\frac{c^{6}a^{3/2}}{48 \Gamma (5\alpha +1)}\sin \biggl( \frac{\sqrt{a}x}{2} \biggr) ,\qquad f_{6}(x)=-\frac{c^{7}a^{2}}{96 \Gamma (6\alpha +1)}\cos \biggl( \frac{\sqrt{a}x}{2} \biggr) , \\ &f_{7}(x)=-\frac{c^{8}a^{5/2}}{192 \Gamma (7\alpha +1)}\sin \biggl( \frac{\sqrt{a}x}{2} \biggr) ,\qquad f_{8}(x)=+\frac{c^{9}a^{3}}{384 \Gamma (8\alpha +1)}\cos \biggl( \frac{\sqrt{a}x}{2} \biggr) , \end{aligned}$$

and so on. In general, for \(n\geq 1\) we have

$$ \Gamma (n\alpha +1) f_{n}(x)= \textstyle\begin{cases} \frac{2c}{3a}(-1)^{k} ( \frac{c\sqrt{a}}{2} ) ^{2k}\cos ( \frac{\sqrt{a}x}{2} )& \mbox{if }n=2k, \\ \frac{2c}{3a}(-1)^{k} ( \frac{c\sqrt{a}}{2} ) ^{2k+1}\sin ( \frac{\sqrt{a}x}{2} )& \mbox{if }n=2k+1. \end{cases} $$
(4.4)

Therefore, in view of (3.4) we have the exact memory solution of \(K_{\alpha }(2,2)\) in closed form,

$$\begin{aligned} u(x,t) &= \frac{2c}{3a} \Biggl[ 1+\cos \biggl( \frac{\sqrt{a}x}{2} \biggr) \sum_{k=0}^{\infty }\frac{(-1)^{k}}{\Gamma (2k\alpha +1)} \biggl( \frac{c \sqrt{a}t^{\alpha }}{2} \biggr) ^{2k} \\ &\quad {}+\sin \biggl( \frac{\sqrt{a}x}{2} \biggr) \sum _{k=0}^{\infty }\frac{(-1)^{k}}{\Gamma ((2k+1)\alpha +1 )} \biggl( \frac{c\sqrt{a}t^{\alpha }}{2} \biggr) ^{2k+1} \Biggr] \\ & = \frac{2c}{3a} \biggl[ 1+\cos \biggl( \frac{\sqrt{a}x}{2} \biggr) E_{2 \alpha ,1} \biggl( -\frac{c^{2}a}{4}t^{2\alpha } \biggr) \\ &\quad {}+\frac{c \sqrt{a}t^{\alpha }}{2}\sin \biggl( \frac{\sqrt{a}x}{2} \biggr) E_{2\alpha ,\alpha +1} \biggl( -\frac{c^{2}a}{4}t^{2\alpha } \biggr) \biggr], \end{aligned}$$
(4.5)

where \(E_{2\alpha ,1} ( \cdot ) \) and \(E_{2\alpha ,\alpha +1} ( \cdot ) \) are the two-parameter Mittag–Leffler functions. Similar versions of (4.1) were handled using homotopy analysis/perturbation methods [40, 41] and the reduced differential transform approach [40], where the obtained solutions were consistent with (4.5). Particularly when \(\alpha =1\), we have the exact solution for the classical \(K(2,2)\) Rosenau–Hyman equation [39]

$$\begin{aligned} u(x,t)& = \frac{2c}{3a} \Biggl[ 1+\cos \biggl( \frac{\sqrt{a}x}{2} \biggr) \sum_{k=0}^{\infty }\frac{(-1)^{k}}{(2k)!} \biggl( \frac{c\sqrt{a}t}{2} \biggr) ^{2k} \\ &\quad {}+ \sin \biggl( \frac{\sqrt{a}x}{2} \biggr) \sum _{k=0}^{\infty } \frac{(-1)^{k}}{(2k+1)!} \biggl( \frac{c\sqrt{a}t}{2} \biggr) ^{2k+1} \Biggr] \\ & = \frac{2c}{3a} \biggl[ 1+\cos \biggl( \frac{\sqrt{a}x}{2} \biggr) \cos \biggl( \frac{c\sqrt{a}t}{2} \biggr) +\sin \biggl( \frac{\sqrt{a}x}{2} \biggr) \sin \biggl( \frac{c\sqrt{a}t}{2} \biggr) \biggr] \\ & = \frac{4c}{3a}\cos^{2} \biggl( \frac{\sqrt{a}(x-ct)}{4} \biggr) \end{aligned}$$
(4.6)

with compact support \(\vert x-ct\vert \leq \frac{2\pi }{\sqrt{a}}\).
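The closed form (4.5) is easy to evaluate by truncating the Mittag–Leffler series. The sketch below (illustrative Python, not the paper's Mathematica code) also confirms the reduction to the compacton (4.6) at \(\alpha =1\):

```python
from math import cos, sin, sqrt, gamma

def ml(a, b, z, terms=60):
    """Two-parameter Mittag-Leffler function E_{a,b}(z), truncated series."""
    return sum(z ** k / gamma(a * k + b) for k in range(terms))

def u_closed(x, t, alpha, a=1.0, c=1.0):
    """The memory solution (4.5) of K_alpha(2,2)."""
    z = -(c ** 2) * a / 4 * t ** (2 * alpha)
    return (2 * c / (3 * a)) * (
        1 + cos(sqrt(a) * x / 2) * ml(2 * alpha, 1, z)
        + (c * sqrt(a) * t ** alpha / 2) * sin(sqrt(a) * x / 2) * ml(2 * alpha, alpha + 1, z))

# at alpha = 1 the solution collapses to the travelling compacton (4.6) with a = c = 1
x, t = 1.0, 0.5
print(abs(u_closed(x, t, 1.0) - (4 / 3) * cos((x - t) / 4) ** 2))  # negligibly small
```

The check works because \(E_{2,1}(-z^{2})=\cos z\) and \(zE_{2,2}(-z^{2})=\sin z\).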

Figure 1 shows the effect of different values of the fractional order \(0<\alpha \leq 1\) on the cross-section approximate solutions \(u_{4}(x,t)=\sum_{n=0}^{4}f_{n}(x)t^{n\alpha }\) of \(K_{\alpha }(2,2)\) at fixed \(t=0.5\) with \(c=a=1\) on the compact support \(\vert x-0.5\vert \leq 2\pi \). Clearly, \(u_{4}(x,0.5)\) for \(\alpha =1\) is in excellent agreement with the cross section of the exact solution of \(K(2,2)\) on its compact support. Furthermore, as α increases, the curves \(u_{4}(x,0.5)\) deform continuously toward the corresponding cross section of the exact solution of \(K(2,2)\).

Figure 1

The cross section \(t=0.5\) of \(K_{\alpha }(2,2)\) approximate solutions for different values of α with \(c=a=1\)

Table 1 shows the consecutive errors \(con_{n}(x,t)=\vert u_{n+1}(x,t)-u_{n}(x,t) \vert \) for \(n=4\) and \(n=10\). It is clear that we obtain remarkable accuracy even with only a few terms, and the accuracy improves as n gets larger and/or t gets closer to zero. Also, the error approaches zero as the memory index α approaches one.

Table 1 The consecutive errors of \(K_{\alpha }(2,2)\) for different values \(0<\alpha <1\) at some points
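The consecutive errors follow directly from (4.4), since \(con_{n}(x,t)=\vert f_{n+1}(x)\vert t^{(n+1)\alpha }\). An illustrative Python sketch (the sample points are for demonstration only):

```python
from math import cos, sin, sqrt, gamma

def f_n(n, x, alpha, a=1.0, c=1.0):
    """Series coefficients (4.4) of K_alpha(2,2)."""
    k, odd = divmod(n, 2)
    trig = sin(sqrt(a) * x / 2) if odd else cos(sqrt(a) * x / 2)
    return (2 * c / (3 * a)) * (-1) ** k * (c * sqrt(a) / 2) ** n * trig / gamma(n * alpha + 1)

def con(n, x, t, alpha):
    """Consecutive error con_n(x,t) = |u_{n+1} - u_n| = |f_{n+1}(x)| t^{(n+1) alpha}."""
    return abs(f_n(n + 1, x, alpha)) * t ** ((n + 1) * alpha)

print(con(4, 1.0, 0.1, 0.9) < con(4, 1.0, 0.5, 0.9))   # smaller t, smaller error
print(con(10, 1.0, 0.5, 0.9) < con(4, 1.0, 0.5, 0.9))  # larger n, smaller error
```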

Example 2

In [42, 43], Zakharov and Kuznetsov introduced an isotropic two-dimensional equation \(ZK(2,2)\) to describe the weakly nonlinear ion acoustic waves in a strongly magnetized lossless plasma. As our next example, we consider the time-fractional nonlinear dispersive equation \(ZK_{\alpha }(2,2)\),

$$ D^{\alpha }_{t} u+2 \bigl(u^{2} \bigr)_{x}+ \bigl(u^{2} \bigr)_{xxx}+ \bigl(u^{2} \bigr)_{yyx}=0, $$
(4.7)

subject to the initial condition with compact support

$$ u(x,y,0)= \textstyle\begin{cases} \frac{2 c}{3} \cos^{2} ( \frac{x+y}{4} ) , & \vert x+y\vert \leq 2\pi , \\ 0, & \mbox{otherwise}, \end{cases} $$
(4.8)

where \(t\geq 0\) and \(0<\alpha \leq 1\). From Theorem 3.5, the fractional power series solution for Eqs. (4.7) and (4.8) has the form

$$ u(x,y,t)=\frac{2 c}{3} \cos^{2} \biggl( \frac{x+y}{4} \biggr) +f_{1}(x,y)t ^{\alpha }+f_{2}(x,y)t^{2\alpha }+f_{3}(x,y)t^{3\alpha }+ \cdots . $$
(4.9)

Proceeding as before with straightforward calculations, the coefficients are obtained recursively as

$$\begin{aligned} &f_{1}(x,y)=+\frac{c^{2}}{6 \Gamma (\alpha +1)}\sin \biggl( \frac{x+y}{2} \biggr) ,\qquad f_{2}(x,y)=-\frac{c^{3}}{12 \Gamma (2\alpha +1)}\cos \biggl( \frac{x+y}{2} \biggr) , \\ &f_{3}(x,y)=-\frac{c^{4}}{24 \Gamma (3\alpha +1)}\sin \biggl( \frac{x+y}{2} \biggr) ,\qquad f_{4}(x,y)=+\frac{c^{5}}{48 \Gamma (4\alpha +1)}\cos \biggl( \frac{x+y}{2} \biggr) , \\ &f_{5}(x,y)=+\frac{c^{6}}{96 \Gamma (5\alpha +1)}\sin \biggl( \frac{x+y}{2} \biggr) ,\qquad f_{6}(x,y)=-\frac{c^{7}}{192 \Gamma (6\alpha +1)}\cos \biggl( \frac{x+y}{2} \biggr) , \\ &f_{7}(x,y)=-\frac{c^{8}}{384 \Gamma (7\alpha +1)}\sin \biggl( \frac{x+y}{2} \biggr) ,\qquad f_{8}(x,y)=+\frac{c^{9}}{768 \Gamma (8\alpha +1)}\cos \biggl( \frac{x+y}{2} \biggr) , \end{aligned}$$

and so on. For \(n\geq 1\), we have the following general form for the unknown coefficients:

$$ \Gamma (n\alpha +1) f_{n}(x,y)= \textstyle\begin{cases} \frac{c}{3}(-1)^{k} ( \frac{c}{2} ) ^{2k} \cos ( \frac{x+y}{2} )&\mbox{ if }n=2k, \\ \frac{c}{3}(-1)^{k} ( \frac{c}{2} ) ^{2k+1}\sin ( \frac{x+y}{2} )& \mbox{ if }n=2k+1. \end{cases} $$
(4.10)

Therefore, the memory solution in closed form is given by

$$\begin{aligned} u(x,y,t)&= \frac{c}{3} \Biggl[ 1+\cos \biggl( \frac{x+y}{2} \biggr) \sum_{k=0}^{ \infty }\frac{(-1)^{k}}{\Gamma (2k\alpha +1)} \biggl( \frac{ct^{\alpha }}{2} \biggr) ^{2k} \\ &\quad {}+\sin \biggl( \frac{x+y}{2} \biggr) \sum _{k=0}^{\infty }\frac{(-1)^{k}}{ \Gamma ((2k+1)\alpha +1 )} \biggl( \frac{ct^{\alpha }}{2} \biggr) ^{2k+1} \Biggr] \\ & = \frac{c}{3} \biggl[ 1+\cos \biggl( \frac{x+y}{2} \biggr) E_{2\alpha ,1} \biggl( -\frac{c^{2}}{4}t^{2\alpha } \biggr) \\ &\quad {} + \frac{ct^{\alpha }}{2} \sin \biggl( \frac{x+y}{2} \biggr) E_{2\alpha ,\alpha +1} \biggl( -\frac{c ^{2}}{4}t^{2\alpha } \biggr) \biggr] . \end{aligned}$$
(4.11)
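As a consistency check (an illustrative sketch, not from the paper), the closed form (4.11) can be compared with the partial sums built from the initial condition (4.8) and the coefficients (4.10):

```python
from math import cos, sin, gamma

def ml(a, b, z, terms=60):
    """Two-parameter Mittag-Leffler function E_{a,b}(z), truncated series."""
    return sum(z ** k / gamma(a * k + b) for k in range(terms))

def u_zk(x, y, t, alpha, c=1.0):
    """Closed-form memory solution (4.11) of ZK_alpha(2,2)."""
    s, z = (x + y) / 2, -(c ** 2) / 4 * t ** (2 * alpha)
    return (c / 3) * (1 + cos(s) * ml(2 * alpha, 1, z)
                      + (c * t ** alpha / 2) * sin(s) * ml(2 * alpha, alpha + 1, z))

def u_series(x, y, t, alpha, N=20, c=1.0):
    """Partial sum of the series, with f_0 from (4.8) and f_n from (4.10)."""
    s = (x + y) / 2
    total = (c / 3) * (1 + cos(s))                      # f_0(x, y)
    for n in range(1, N + 1):
        k, odd = divmod(n, 2)
        trig = sin(s) if odd else cos(s)
        total += ((c / 3) * (-1) ** k * (c / 2) ** n * trig
                  / gamma(n * alpha + 1)) * t ** (n * alpha)
    return total

print(abs(u_zk(0.7, 0.4, 0.5, 0.8) - u_series(0.7, 0.4, 0.5, 0.8)))  # negligibly small
```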

In particular when \(\alpha =1\), we have the exact solution to the classical \(ZK(2,2)\) equation [44]

$$\begin{aligned} u(x,y,t) & = \frac{c}{3} \Biggl[ 1+\cos \biggl( \frac{x+y}{2} \biggr) \sum_{k=0}^{ \infty }\frac{(-1)^{k}}{(2k)!} \biggl( \frac{ct}{2} \biggr) ^{2k} \\ &\quad {}+\sin \biggl( \frac{x+y}{2} \biggr) \sum _{k=0}^{\infty } \frac{(-1)^{k}}{(2k+1)!} \biggl( \frac{ct}{2} \biggr) ^{2k+1} \Biggr] \\ & = \frac{2c}{3}\cos^{2} \biggl( \frac{x+y-ct}{4} \biggr) . \end{aligned}$$
(4.12)

Figure 2 shows the effect of the memory index \(0<\alpha \leq 1\) on the surface approximate solutions \(u_{4}(x,y,t)\) of \(ZK_{\alpha }(2,2)\) at fixed \(t=0.5\) with \(c=1\). As α increases, the surfaces \(u_{4}(x,y,0.5)\) deform continuously toward the corresponding surface at \(\alpha =1\).

Figure 2

The approximate solutions \(u_{4}(x,y,0.5)\) of \(ZK_{\alpha }(2,2)\) for different values of α with \(c=1\)

Table 2 shows the consecutive errors for \(n=4\) and \(n=10\). As expected, the error is smaller when t is relatively small, n is large, or α approaches one.

Table 2 The consecutive errors of \(ZK_{\alpha }(2,2)\) for different values \(0<\alpha <1\) at some points

Example 3

In [45], Rosenau proposed a dispersive-dissipative entity \(DD(k,m,n)\) which combines the interaction between convection, dispersion, and dissipation. In this example, we consider the time-fractional version \(DD_{\alpha }(1,2,2)\) (the classical one is known as the \(K(2,2)\)-Burgers equation), which reads

$$ D^{\alpha }_{t} u- \bigl(u^{2} \bigr)_{x}+ \bigl(u^{2} \bigr)_{xxx}- \bigl(u^{1} \bigr)_{xx}=0, $$
(4.13)

subject to the initial condition \(u(x,0)=-2 ( 1+\tanh ( - \frac{x}{4} ) ) ^{-1}=- ( 1+e^{\frac{x}{2}} ) \), where \(t\geq 0\) and \(0<\alpha \leq 1\). Following Eq. (3.7), we recursively obtain the coefficients \(f_{n}(x)\) for \(n\geq 1\) as follows:

$$ \Gamma (n\alpha +1) f_{n}(x)= \textstyle\begin{cases} -\frac{1}{ 2^{n}}e^{\frac{x}{2}}&:n\mbox{ even}, \\ \frac{1}{ 2^{n}}e^{\frac{x}{2}}&:n\mbox{ odd}. \end{cases} $$
(4.14)

Therefore, in view of (3.4) we have the memory solution of \(DD_{\alpha }(1,2,2)\) in closed form,

$$\begin{aligned} u(x,t) &= -1-e^{\frac{x}{2}}\sum_{k=0}^{\infty } \frac{1}{\Gamma (2k \alpha +1)} \biggl( \frac{ t^{\alpha }}{2} \biggr) ^{2k}+e^{\frac{x}{2}} \sum_{k=0}^{\infty }\frac{1}{\Gamma ((2k+1)\alpha +1 )} \biggl( \frac{ t^{\alpha }}{2} \biggr) ^{2k+1} \\ & = -1+e^{\frac{x}{2}} \biggl[ \frac{ t^{\alpha }}{2}E_{2\alpha , \alpha +1} \biggl( \frac{ t^{2\alpha }}{4} \biggr) -E_{2\alpha ,1} \biggl( \frac{ t^{2\alpha }}{4} \biggr) \biggr] . \end{aligned}$$
(4.15)

Particularly with \(\alpha =1\), we have the exact solution \(u(x,t)=- ( 1+e^{\frac{x-t}{2}} ) =-2 ( 1+\tanh ( \frac{t-x}{4} ) ) ^{-1}\) for the classical \(DD(1,2,2)\) equation [46].
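A quick numerical check (illustrative) of the algebraic identity behind this reduction, namely \(-2 (1+\tanh (\frac{t-x}{4} ) )^{-1}=-(1+e^{(x-t)/2})\):

```python
from math import exp, tanh

# the two forms of the alpha = 1 solution of DD(1,2,2) at a sample point
x, t = 1.7, 0.5
u_exp  = -(1 + exp((x - t) / 2))
u_tanh = -2 / (1 + tanh((t - x) / 4))
print(abs(u_exp - u_tanh))  # agreement to machine precision
```

The identity follows from \(1+\tanh u = 2/(1+e^{-2u})\) with \(u=(t-x)/4\).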

Figure 3 shows the behavior of the approximate solutions \(u_{4}(x,0.5)\) of \(DD_{\alpha }(1,2,2)\) for various values of \(0<\alpha \leq 1\). Evidently, \(u_{4}(x,0.5)\) for \(\alpha =1\) is in excellent agreement with the exact solution for \(x\in [ \frac{-4\pi +1}{2},\frac{4\pi +1}{2} ] \). Moreover, as α increases, \(u_{4}(x,0.5)\) deforms continuously toward the curve at \(\alpha =1\), so a reliable approximation is expected for the intermediate values of α.

Figure 3

The cross section \(t=0.5\) of \(DD_{\alpha }(1,2,2)\) approximate solutions for different values of α

Example 4

In [47], Dey studied the role of the fifth-order dispersion term in the existence of compacton solutions and in the stability of solitons for the usual Korteweg–de Vries (KdV) equation. We next consider the time-fractional version \(K_{\alpha }(2,2,1)\) of the form

$$ D^{\alpha }_{t} u+ \bigl(u^{2} \bigr)_{x}- \bigl(u^{2} \bigr)_{xxx}+ \bigl(u^{1} \bigr)_{5x}=0 $$
(4.16)

subject to the initial condition \(u(x,0)=\frac{16c-1}{12}\cosh^{2} ( \frac{x}{4} ) \), where \(t\geq 0\) and \(0<\alpha \leq 1\). Following the formula (3.7), we recursively obtain the coefficients \(f_{n}(x)\), \(n\geq 1\) as follows:

$$ \Gamma (n\alpha +1) f_{n}(x)= \textstyle\begin{cases} \frac{16c-1}{24} ( \frac{c}{2} ) ^{n}\cosh ( \frac{x}{2} )&: n \mbox{ even}, \\ -\frac{16c-1}{24} ( \frac{c}{2} ) ^{n}\sinh ( \frac{x}{2} )& : n \mbox{ odd}. \end{cases} $$
(4.17)

Therefore in view of (3.4), the solution of \(K_{\alpha }(2,2,1)\) is given in closed form by

$$\begin{aligned} u(x,t) &= \frac{16c-1}{24} \Biggl[ 1+\cosh \biggl( \frac{x}{2} \biggr) \sum_{k=0}^{\infty }\frac{1}{\Gamma (2k\alpha +1)} \biggl( \frac{ ct^{ \alpha }}{2} \biggr) ^{2k} \\ &\quad {}- \sinh \biggl( \frac{x}{2} \biggr) \sum _{k=0} ^{\infty }\frac{1}{\Gamma ((2k+1)\alpha +1 )} \biggl( \frac{ ct ^{\alpha }}{2} \biggr) ^{2k+1} \Biggr] \\ & = \frac{16c-1}{24} \biggl[ 1+\cosh \biggl( \frac{x}{2} \biggr) E_{2\alpha ,1} \biggl( \frac{ c^{2}t^{2\alpha }}{4} \biggr) \\ &\quad {}- \frac{ ct^{\alpha }}{2}\sinh \biggl( \frac{x}{2} \biggr) E_{2\alpha , \alpha +1} \biggl( \frac{ c^{2}t^{2\alpha }}{4} \biggr) \biggr] , \end{aligned}$$
(4.18)

which is identical to the solution obtained by using the homotopy perturbation method [48]. Particularly with \(\alpha =1\), we have the exact solution for the classical \(K(2,2,1)\) equation [49],

$$\begin{aligned} u(x,t)& = \frac{16c-1}{24} \Biggl[ 1+\cosh \biggl( \frac{x}{2} \biggr) \sum_{k=0} ^{\infty }\frac{1}{(2k)!} \biggl( \frac{ct}{2} \biggr) ^{2k} \\ &\quad {}- \sinh \biggl( \frac{x}{2} \biggr) \sum _{k=0}^{\infty }\frac{1}{(2k+1)!} \biggl( \frac{ct}{2} \biggr) ^{2k+1} \Biggr] \\ & = \frac{16c-1}{12}\cosh^{2} \biggl( \frac{x-ct}{4} \biggr) \end{aligned}$$
(4.19)

with compact support \(\vert x-ct\vert \leq 2\pi \).
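The last step of (4.19) uses \(\cosh A\cosh B-\sinh A\sinh B=\cosh (A-B)\) and \(1+\cosh \theta =2\cosh^{2}(\theta /2)\); a quick numerical check (illustrative values):

```python
from math import cosh, sinh

# hyperbolic identity behind the alpha = 1 reduction (4.19):
# 1 + cosh(x/2)cosh(ct/2) - sinh(x/2)sinh(ct/2) = 2 cosh^2((x - ct)/4)
x, t, c = 0.9, 0.5, 1.2
lhs = 1 + cosh(x / 2) * cosh(c * t / 2) - sinh(x / 2) * sinh(c * t / 2)
rhs = 2 * cosh((x - c * t) / 4) ** 2
print(abs(lhs - rhs))  # agreement to machine precision
```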

Figure 4 exhibits the approximate solutions \(u_{4}(x,0.5)\) of \(K_{\alpha }(2,2,1)\) for various values of \(0<\alpha \leq 1\). Again, \(u_{4}(x,0.5)\) for \(\alpha =1\) is in excellent agreement with the exact solution on its compact support, and, as α increases, \(u_{4}(x,0.5)\) deforms continuously toward the curve at \(\alpha =1\).

Figure 4

The cross section \(t=0.5\) of \(K_{\alpha }(2,2,1)\) approximate solutions for different values of α

5 Conclusion

In this paper, a hybrid version of the power series method is presented to handle the general time-invariant fractional initial value problem (3.1). The solution of (3.1) is given analytically in the form of a convergent multi-fractional power series (2.1) without the need for any linearization, perturbation, or discretization. Several nonlinear dispersive examples were tested, and the exact memory solutions were obtained in closed form. The physical interpretation of these solutions is beyond the scope of this work; however, the graphs of the n-term approximate memory solutions, labeled by the memory index \(0<\alpha \leq 1\), deform continuously as α varies, reflecting the memory and hereditary effects.