1 Introduction

Fractional differential equations (FDEs) have become one of the most active research topics in recent years, not only because they can be used to model real-life phenomena, but also because they give researchers considerable flexibility in describing material properties. Fractional-order models are often more adequate than the previously used integer-order models [1, 2], because fractional-order derivatives and integrals enable the description of the memory and hereditary properties of different substances. Systems of fractional partial differential equations are an effective tool for modeling phenomena in many fields, such as fluid mechanics, biology, finance, and materials science.

Finding the exact solution of an FDE is very difficult, even for a linear one, so approximate solutions are needed. The solution of systems of fractional partial differential equations has been addressed by several researchers: Ertürk and Momani applied the differential transform method [3], Ghazanfari investigated the fractional complex transform method [4], Jafari et al. presented a Laplace transform combined with an iterative method [5], and Ahmed et al. used the Laplace Adomian decomposition method and the Laplace variational iteration method [6].

The homotopy analysis method (HAM) is one of the most effective methods for solving FDEs [7, 8]. It can give a convergent series solution that depends on a convergence control parameter, and the series can be represented using various basis functions. The major drawback of the method is that each term requires solving a sub-differential equation or evaluating a sub-integral, which affects speed and memory usage. These limitations call for more efficient and practical algorithms. In this article, we use the Sumudu transformation to overcome these limitations while retaining all of the HAM features, such as the freedom to choose the initial guess, the control parameter, and the basis functions. The Sumudu transform has been combined with several other methods, such as the homotopy perturbation method [9], the Adomian decomposition method [10], and the homotopy analysis method [11, 12]. Compared with the standard HAM, the proposed method reduces the volume of computational work while maintaining the high accuracy of the numerical results, and therefore amounts to an improvement in the performance of the approach [13].

The rest of the paper is organized as follows. In Sect. 2, we review some facts about fractional derivatives and the Sumudu transformation, and we introduce the solution procedure in Sect. 3. The existence of the solution and the convergence of the method are established in Sect. 4. Numerical examples illustrating the theoretical results are provided in Sect. 5, and conclusions are drawn in Sect. 6.

2 Preliminaries and notations

In this section, some definitions and properties of fractional calculus and the Sumudu transform are briefly mentioned. For more details see [14–20].

2.1 Fractional calculus

We start with the following definition.

Definition 2.1

A real function \(f(t)\); \(t > 0\), is said to be in the space \(C_{\mu }\); \(\mu \in \Re \), if there exists a real number \(p > \mu \), such that \(f(t)=t^{p}f_{1}(t)\), where \(f_{1}(t)\in C(0,\infty )\), and it is said to be in the space \(C_{\mu }^{n}\) if and only if \(f^{(n)}\in C_{\mu }\); \(n \in \mathbb{N}\).

Now we can give the main definitions of fractional integrals and derivatives.

Definition 2.2

The Riemann–Liouville fractional integral operator \((J^{\alpha })\) of order \(\alpha \geq 0\), of a function \(f\in C_{\mu }\), \(\mu \geq -1\), is defined as

$$\begin{aligned}& J^{\alpha } f(t)= \frac{1}{\varGamma (\alpha )} \int _{0}^{t} { ( {t - s} )} ^{\alpha - 1} f(s) \,ds \quad (\alpha > 0), \end{aligned}$$
(1)
$$\begin{aligned}& J^{0} f(t)= f(t), \end{aligned}$$
(2)

where \(\varGamma (\alpha )\) is the well-known gamma function.
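Definition 2.2 is easy to test numerically. The sketch below (illustrative Python, not part of the original text; the quadrature details and the substitution used to remove the endpoint singularity are our own choice) evaluates \(J^{\alpha }f\) by the midpoint rule for \(f(s)=s\) and compares the result with the closed form \(J^{\alpha }t=t^{1+\alpha }/\varGamma (2+\alpha )\).

```python
import math

def rl_integral(f, t, alpha, n=2000):
    """Numerically evaluate the Riemann-Liouville integral
    (J^alpha f)(t) = 1/Gamma(alpha) * int_0^t (t-s)^(alpha-1) f(s) ds.
    Substituting u = t - s and then u = v**(1/alpha) removes the
    integrable singularity at s = t, leaving a smooth integrand."""
    upper = t ** alpha
    h = upper / n
    total = 0.0
    for i in range(n):
        v = (i + 0.5) * h            # midpoint rule in the v variable
        u = v ** (1.0 / alpha)
        total += f(t - u) * h
    return total / (alpha * math.gamma(alpha))

# J^alpha applied to f(s) = s should give t**(1+alpha) / Gamma(2+alpha)
t, alpha = 1.0, 0.5
approx = rl_integral(lambda s: s, t, alpha)
exact = t ** (1 + alpha) / math.gamma(2 + alpha)
print(approx, exact)
```

The substitution \(u=v^{1/\alpha }\) turns the weight \(u^{\alpha -1}\,du\) into the constant \(\alpha ^{-1}\,dv\), so plain midpoint quadrature converges quickly.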

Definition 2.3

The fractional derivative of a function \(f\in C_{-1}^{n}\) in the Caputo sense is defined as

$$\begin{aligned}& D^{\alpha } f(t)= \frac{1}{\varGamma (n-\alpha )} \int _{0}^{t} { ( {t - s} )} ^{n-\alpha - 1} f^{(n)}(s)\,ds, \end{aligned}$$
(3)

where \(n-1< \alpha < n\) and \(n \in \mathbb{N}\).

We mention the following basic properties of fractional derivatives and integrals:

  1.

    If \(f\in C_{-1}^{n}\) for some \(n \in \mathbb{N}\), then \(D^{\alpha }f\) is well defined for all \(0\leq \alpha \leq n\) with \(D^{\alpha }f \in C_{-1}\).

  2.

    If \(f\in C_{\mu }^{n}\) for some \(\mu \geq -1\), then

    $$ \bigl(J^{\alpha }D^{\alpha }\bigr)f(t) = f(t) - \sum _{k = 0}^{n - 1} {f^{(k)} \bigl(0^{+} \bigr)} \frac{{t^{k} }}{{k!}}, $$
    (4)

    provided \(n-1< \alpha \leq n\).

  3.

    For all \(\gamma >\alpha \) one has

    $$ D^{\alpha }t^{\gamma }= \frac{\varGamma (\gamma +1)}{\varGamma (\gamma -\alpha +1)}t^{\gamma - \alpha }. $$
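Property 3 can be checked the same way through the Caputo definition (3) with \(n=1\). The Python sketch below is illustrative only (the substitution and step counts are our own choices), and compares the quadrature value with the stated power rule.

```python
import math

def caputo(fprime, t, alpha, n=4000):
    """Caputo derivative of order 0 < alpha < 1 via
    D^alpha f(t) = 1/Gamma(1-alpha) * int_0^t (t-s)^(-alpha) f'(s) ds.
    Substituting u = t-s and then u = v**(1/(1-alpha)) smooths the
    endpoint singularity before applying the midpoint rule."""
    upper = t ** (1.0 - alpha)
    h = upper / n
    total = 0.0
    for i in range(n):
        v = (i + 0.5) * h
        u = v ** (1.0 / (1.0 - alpha))
        total += fprime(t - u) * h
    return total / ((1.0 - alpha) * math.gamma(1.0 - alpha))

# Property 3: D^alpha t^gamma = Gamma(gamma+1)/Gamma(gamma-alpha+1) * t^(gamma-alpha)
alpha, gamma_, t = 0.5, 2.0, 1.0
approx = caputo(lambda s: gamma_ * s ** (gamma_ - 1.0), t, alpha)
exact = math.gamma(gamma_ + 1.0) / math.gamma(gamma_ - alpha + 1.0) * t ** (gamma_ - alpha)
print(approx, exact)
```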

2.2 Sumudu transform

The Sumudu transform is given by [21]

$$ S\bigl[f(t)\bigr]=F(\eta )= \frac{1}{\eta } \int _{0}^{\infty }e^{\frac{-t}{\eta }}f(t) \,dt, $$
(5)

where \(f\in A\) with

$$ A=\bigl\{ f(t)\mid \exists M,\tau _{1},\tau _{2}>0, \bigl\vert f(t) \bigr\vert < Me^{ \frac{ \vert t \vert }{\tau _{j}}}, \text{if } t\in (-1)^{j} \times [0,\infty )\bigr\} . $$

The Sumudu transform possesses the following main properties:

  1.

    \(S[c]=c\) for any constant c;

  2.

    \(S[t^{m}]=\varGamma (m+1)\eta ^{m}\) for any \(m>0\);

  3.

    \(S[\alpha f(t)\pm \beta g(t)]=\alpha S[f(t)]\pm \beta S[g(t)]\);

  4.

    For \(n-1<\alpha \leq n\), we have

    $$ S\bigl[D^{\alpha }_{t} f(t)\bigr]=\eta ^{-\alpha }S\bigl[f(t) \bigr]-\sum_{i=0}^{n-1}\eta ^{- \alpha +i}f^{(i)}\bigl(0^{+}\bigr). $$

The inverse Sumudu transform of a function \(F(\eta )\) is given by [21]

$$\begin{aligned} S^{-1}\bigl[F(\eta )\bigr] =&f(t)= \frac{1}{2\pi i} \int _{\gamma -i\infty }^{ \gamma +i\infty }e^{st}F\biggl( \frac{1}{s}\biggr)\frac{ds}{s} \\ =&\sum \operatorname{Residues} \biggl[\frac{e^{st}F(1/s)}{s} \biggr], \end{aligned}$$

which exists provided \(F(1/s)/s\) is a meromorphic function, with singularities s satisfying \(\mathit{Re}(s) < c\) for some constant c, and

$$ \biggl\vert \frac{F(1/s)}{s} \biggr\vert \leq MR^{-K} $$

for some positive constants R, M, and K.
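The defining integral (5) can also be verified numerically. The illustrative Python sketch below (the truncation point and step count are our own choices) approximates \(S[t^{m}]\); the result matches \(\varGamma (m+1)\eta ^{m}\), with the Gamma factor multiplying, rather than dividing, \(\eta ^{m}\).

```python
import math

def sumudu(f, eta, n=20000, cutoff=40.0):
    """Numerically evaluate S[f](eta) = (1/eta) * int_0^inf exp(-t/eta) f(t) dt.
    The substitution t = eta*w turns this into int_0^inf exp(-w) f(eta*w) dw,
    truncated at w = cutoff, where exp(-w) is negligible."""
    h = cutoff / n
    total = 0.0
    for i in range(n):
        w = (i + 0.5) * h            # midpoint rule
        total += math.exp(-w) * f(eta * w) * h
    return total

# S[t^m] should equal Gamma(m+1) * eta^m
m, eta = 2.5, 0.7
approx = sumudu(lambda t: t ** m, eta)
exact = math.gamma(m + 1.0) * eta ** m
print(approx, exact)
```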

3 Solution procedure

To express the solution by the proposed method, let us consider the fractional partial differential equation

$$ D^{\alpha }_{t} u(x,t) = N\bigl[u(x,t)\bigr], $$
(6)

where \(n-1<\alpha <n\) for positive integer n, subject to the initial conditions

$$ u(x,0)=f_{0}(x),\qquad \frac{\partial u(x,t)}{\partial t}\bigg|_{t=0}=f_{1}(x), \quad\quad \ldots,\qquad \frac{\partial ^{n-1} u(x,t)}{\partial t^{n-1}}\bigg|_{t=0}=f_{n-1}(x). $$
(7)

By taking the Sumudu transform for both sides of Eq. (6), we have

$$\begin{aligned}& \frac{S[u(x,t)]}{\eta ^{\alpha }}-\sum_{k=0}^{n-1}\eta ^{k-\alpha } \frac{\partial ^{k} u(x,t)}{\partial t^{k}}\bigg|_{t=0^{+}} =S\bigl[N\bigl[u(x,t)\bigr] \bigr], \end{aligned}$$
(8)
$$\begin{aligned}& S\bigl[u(x,t)\bigr]=\eta ^{\alpha }S\bigl[N\bigl[u(x,t) \bigr]\bigr]+\sum_{k=0}^{n-1}\eta ^{k} \frac{\partial ^{k} u(x,t)}{\partial t^{k}}\bigg|_{t=0^{+}} \end{aligned}$$
(9)
$$\begin{aligned}& \hphantom{ S\bigl[u(x,t)\bigr]}=\eta ^{\alpha }S\bigl[N\bigl[u(x,t)\bigr]\bigr]+g\bigl(\eta ,f_{i}(x)\bigr), \end{aligned}$$
(10)

where

$$ g\bigl(\eta ,f_{i}(x)\bigr)=f_{0}(x)+\eta f_{1}(x)+\cdots +\eta ^{n-1}f_{n-1}(x). $$

Now the main difficulty here is to find the solution \(u(x,t)\) by invoking the inverse Sumudu transform for Eq. (10), in particular for the nonlinear term \(\eta ^{\alpha }S[N[u(x,t)]]\). To tackle this, we can utilize the HAM by defining the homotopy map

$$\begin{aligned} (1-q)S\bigl[\phi (x,t;q)-u_{0}(x,t)\bigr]= \hbar q N_{1}\bigl[\phi (x,t;q)\bigr], \end{aligned}$$
(11)

where \(q\in [0,1]\) is an embedding parameter, ħ is the convergence control parameter, \(N_{1}[\phi (x,t;q)]\) is the nonlinear operator given by

$$ N_{1}\bigl[\phi (x,t;q)\bigr]=S\bigl[\phi (x,t;q) \bigr]-\eta ^{\alpha }S\bigl[N\bigl[\phi (x,t;q)\bigr]\bigr]-g\bigl( \eta ,f_{i}(x)\bigr), $$
(12)

and \(\phi (x,t;q)\) is expanded in a Taylor series with respect to q as

$$\begin{aligned} \phi (x,t;q)=\sum_{m=0}^{\infty }u_{m}(x,t)q^{m}. \end{aligned}$$
(13)

We note that, as q varies from 0 to 1, the solution \(\phi (x,t;q)\) of the zeroth-order deformation equation (11) varies from the initial guess \(\phi (x,t;0)=u_{0}(x,t)\) to the exact solution \(\phi (x,t;1)=u(x,t)\).

We have the following auxiliary result.

Theorem 3.1

The nonlinear term \(N[\phi (x,t;q)]\) satisfies the property

$$\begin{aligned} N\bigl[\phi (x,t;q)\bigr] =& \sum _{k=0}^{\infty } \Biggl[\frac{1}{k!} \frac{\partial ^{k}}{\partial q^{k}}N \Biggl[\sum_{j=0}^{k} u_{j}(x,t)q^{j} \Biggr]_{q=0} \Biggr]q^{k}. \end{aligned}$$
(14)

Proof

The Maclaurin series of \(N[\phi (x,t;q)]\) with respect to q is given by

$$\begin{aligned} N\bigl[\phi (x,t;q)\bigr] =& \sum_{k=0}^{\infty } \frac{1}{k!} \frac{\partial ^{k}}{\partial q^{k}} \bigl\{ N\bigl[\phi (x,t;q)\bigr] \bigr\} _{q=0}q^{k} \\ =&\sum_{k=0}^{\infty }\frac{1}{k!} \frac{\partial ^{k}}{\partial q^{k}} \Biggl\{ N \Biggl[\sum_{j=0}^{\infty }u_{j}(x,t)q^{j} \Biggr] \Biggr\} _{q=0}q^{k} \\ =&\sum_{k=0}^{\infty }\frac{1}{k!} \frac{\partial ^{k}}{\partial q^{k}} \Biggl\{ N \Biggl[\sum_{j=0}^{k} u_{j}(x,t)q^{j}+ \sum_{j=k+1}^{\infty }u_{j}(x,t)q^{j} \Biggr] \Biggr\} _{q=0}q^{k} \\ =&\sum_{k=0}^{\infty }\frac{1}{k!} \frac{\partial ^{k}}{\partial q^{k}} \Biggl\{ N \Biggl[\sum_{j=0}^{k} u_{j}(x,t)q^{j} \Biggr] \Biggr\} _{q=0}q^{k}, \end{aligned}$$

which completes the proof; the tail \(\sum_{j=k+1}^{\infty }u_{j}(x,t)q^{j}\) may be dropped in the last step since it is \(O(q^{k+1})\) and therefore does not affect the kth derivative of \(N[\phi ]\) at \(q=0\). □

The next theorem presents the recursive formula of the unknown coefficients \(u_{m}(x,t)\).

Theorem 3.2

If we substitute Eq. (13) into the zeroth-order deformation equation (11), then the unknown functions \(u_{m}(x,t)\) are given by

$$ u_{m}(x,t) = (\hbar +\chi _{m})u_{m-1}(x,t) - \hbar \Biggl(S^{-1} \bigl[\eta ^{\alpha }S[R_{m-1}] \bigr]+(1-\chi _{m})\sum_{i=0}^{n-1}f_{i}(x) \frac{t^{i}}{i!} \Biggr), $$
(15)

where

$$ R_{m-1}=\frac{1}{(m-1)!} \frac{\partial ^{m-1} N[\phi (x,t;q)]}{\partial q^{m-1}}\bigg| _{q=0} $$
(16)

and

$$ \chi _{m}= \textstyle\begin{cases} 0, & \textit{if }m \leq 1, \\ 1, & \textit{if }m > 1,\end{cases} $$

for all \(m=1,2,3,\ldots \).

Proof

By substituting the series in Eq. (13) in the left-hand side of Eq. (11) and equating the coefficients of the powers \(q^{i}\), \(i=1,2,\ldots, m\), we have

$$\begin{aligned}& q^{1}: \quad S\bigl[u_{1}(x,t)\bigr], \\ & q^{2}: \quad S\bigl[u_{2}(x,t)\bigr] -S \bigl[u_{1}(x,t)\bigr], \\ & \vdots \\ & q^{m}:\quad S\bigl[u_{m}(x,t)\bigr] -S \bigl[u_{m-1}(x,t)\bigr]=S\bigl[u_{m}(x,t)\bigr] -\chi _{m} S\bigl[u_{m-1}(x,t)\bigr]. \end{aligned}$$

With the aid of Theorem 3.1, the right-hand side can be written as

$$\begin{aligned}& q^{1}: \quad \hbar \bigl(S\bigl[u_{0}(x,t)\bigr] -\eta ^{\alpha }S[R_{0}]-g\bigl(\eta ,f_{i}(x)\bigr) \bigr), \\ & q^{2}: \quad \hbar \bigl(S\bigl[u_{1}(x,t)\bigr] -\eta ^{\alpha }S[R_{1}] \bigr), \\& \vdots \\& \begin{aligned} q^{m}: \quad &\hbar \bigl(S\bigl[u_{m-1}(x,t) \bigr] -\eta ^{\alpha }S[R_{m-1}] \bigr) \\ &\quad =\hbar \bigl(S\bigl[u_{m-1}(x,t)\bigr] -\eta ^{\alpha }S[R_{m-1}]- (1-\chi _{m})g\bigl(\eta ,f_{i}(x) \bigr) \bigr) \end{aligned} \end{aligned}$$

from which follows that

$$\begin{aligned} S\bigl[u_{m}(x,t)\bigr] =& (\hbar +\chi _{m})S \bigl[u_{m-1}(x,t)\bigr] \\ &{}-\hbar \bigl(\eta ^{\alpha }S[R_{m-1}]+(1-\chi _{m})g\bigl(\eta ,f_{i}(x)\bigr) \bigr). \end{aligned}$$
(17)

Applying the inverse Sumudu transform for Eq. (17) yields

$$\begin{aligned} u_{m}(x,t) =& (\hbar +\chi _{m})u_{m-1}(x,t) \\ &{}-\hbar \Biggl(S^{-1} \bigl[\eta ^{\alpha }S[R_{m-1}] \bigr]+(1-\chi _{m}) \sum_{i=0}^{n-1}f_{i}(x) \frac{t^{i}}{i!} \Biggr), \end{aligned}$$

and this ends the proof. □

In practice, we define the Mth-order approximate solution of the given problem as

$$ U_{M}(x,t)=\sum_{i=0}^{M} u_{i}(x,t), $$

while the residual error for the given solution is defined as

$$ \mathit{Res}_{M} = D^{\alpha }_{t} U_{M}(x,t) - N\bigl[U_{M}(x,t)\bigr]. $$
(18)
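To illustrate how the recursion (15) operates, the following sketch (illustrative Python applied to a hypothetical scalar test problem \(D^{\alpha }_{t}u=-u\), \(u(0)=1\), not one of the paper's examples) represents each \(u_{m}\) by its coefficients in the basis \(t^{k\alpha }/\varGamma (k\alpha +1)\), on which \(S^{-1}[\eta ^{\alpha }S[\cdot ]]\) acts as the index shift \(k\mapsto k+1\). With \(\hbar =-1\) the partial sums reproduce the Mittag-Leffler series \(E_{\alpha }(-t^{\alpha })\), the exact solution of the test problem.

```python
import math

hbar = -1.0   # convergence-control parameter (our choice for this sketch)
M = 25        # number of deformation terms

def j_alpha(coeffs):
    """In the basis t^(k*alpha)/Gamma(k*alpha+1), the operator
    S^{-1}[eta^alpha S[.]] simply shifts every basis index k -> k+1."""
    return {k + 1: c for k, c in coeffs.items()}

def add(a, b, sa=1.0, sb=1.0):
    """Return sa*a + sb*b for coefficient dictionaries."""
    out = {}
    for k, c in a.items():
        out[k] = out.get(k, 0.0) + sa * c
    for k, c in b.items():
        out[k] = out.get(k, 0.0) + sb * c
    return out

# Recursion (15) for the scalar test problem D^alpha u = -u, u(0) = 1.
terms = [{0: 1.0}]                       # u_0 = initial guess = 1
for m in range(1, M + 1):
    chi = 0.0 if m == 1 else 1.0
    r = {k: -c for k, c in terms[-1].items()}         # R_{m-1} = -u_{m-1}
    correction = add(j_alpha(r), {0: 1.0}, 1.0, 1.0 - chi)
    u_m = add({k: (hbar + chi) * c for k, c in terms[-1].items()},
              correction, 1.0, -hbar)
    terms.append(u_m)

# Evaluate the partial sum at t = 1, alpha = 0.8 and compare it with the
# Mittag-Leffler series E_alpha(-t^alpha) computed directly.
alpha, t = 0.8, 1.0
total = {}
for u in terms:
    total = add(total, u)
approx = sum(c * t ** (k * alpha) / math.gamma(k * alpha + 1) for k, c in total.items())
ml = sum((-1.0) ** k * t ** (k * alpha) / math.gamma(k * alpha + 1) for k in range(60))
print(approx, ml)
```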

4 Existence and convergence results

In this section, we introduce the main results regarding the existence and convergence of the proposed algorithm.

Theorem 4.1

If an optimal \(\hbar \neq 0\) exists and \(u_{0}(x,t)\) is properly chosen in Eq. (15) in such a way that \(\Vert u_{n+1}(x,t) \Vert \leq \lambda \Vert u_{n}(x,t) \Vert \) for some \(0\leq \lambda < 1\), then the series \(\sum_{n=0}^{\infty }u_{n}(x,t)\) converges uniformly, where \(\Vert \cdot \Vert \) denotes the supremum norm.

Proof

Let \(S_{n}\) be the sequence of partial sums \(S_{n}=\sum_{i=0}^{n} u_{i}(x,t)\). We show that the sequence \(\{S_{n}\}_{n=0}^{\infty }\) is Cauchy. First we observe that

$$ \Vert S_{n+1}-S_{n} \Vert = \Vert u_{n+1} \Vert \leq \lambda \Vert u_{n} \Vert \leq \lambda ^{2} \Vert u_{n-1} \Vert \leq \cdots \leq \lambda ^{n+1} \Vert u_{0} \Vert . $$

With the help of the above equation, for all \(n,m\in \mathbb{N} \) with \(n\geq m\), we have

$$\begin{aligned} \Vert S_{n}-S_{m} \Vert =& \Vert S_{n}-S_{n-1}+S_{n-1}-S_{n-2}+S_{n-2}- \cdots -S_{m+1}+S_{m+1}-S_{m} \Vert \\ \leq & \Vert S_{n}-S_{n-1} \Vert + \Vert S_{n-1}-S_{n-2} \Vert +\cdots + \Vert S_{m+1}-S_{m} \Vert \\ \leq &\lambda ^{n} \Vert u_{0} \Vert +\lambda ^{n-1} \Vert u_{0} \Vert +\cdots +\lambda ^{m+1} \Vert u_{0} \Vert \\ =& \Vert u_{0} \Vert \bigl(\lambda ^{n}+\lambda ^{n-1}+\cdots +\lambda ^{m+1}\bigr), \end{aligned}$$

which leads to

$$\begin{aligned} \Vert S_{n}-S_{m} \Vert \leq &\frac{\lambda ^{m+1}}{1-\lambda } \Vert u_{0} \Vert , \end{aligned}$$
(19)

and consequently \(\Vert S_{n}-S_{m} \Vert \rightarrow 0\) as \(n,m\rightarrow \infty \). Thus, the sequence \(\{S_{n}\}\) is a Cauchy sequence, and hence it converges. □

Corollary 4.2

Suppose \(\sum_{i=0}^{\infty }u_{i}(x,t)\) converges to the solution \(u(x,t)\) of Eq. (6) and the hypotheses of Theorem 4.1 hold. Then the maximal absolute truncation error committed by using the first \(m+1\) terms in the domain \((x,t)\in \varOmega \) can be estimated as

$$ \sup_{\substack{(x,t)\in \varOmega }} \Biggl\vert u(x,t)-\sum _{i=0}^{m} u_{i}(x,t) \Biggr\vert \leq \frac{\lambda ^{m+1}}{1-\lambda } \varXi, $$
(20)

where \({\varXi =\sup_{\substack{(x,t)\in \varOmega }} \vert u_{0}(x,t) \vert }\).

Proof

Since \(S_{n}=\sum_{i=0}^{n} u_{i}(x,t)\), as \(n\rightarrow \infty \) the partial sum \(S_{n}\rightarrow u(x,t)\). Therefore, Eq. (19) can be written as

$$\begin{aligned} \bigl\Vert u(x,t)-S_{m} \bigr\Vert =& \Biggl\Vert u(x,t)-\sum _{i=0}^{m}u_{i}(x,t) \Biggr\Vert \\ \leq & \frac{\lambda ^{m+1}}{1-\lambda } \bigl\Vert u_{0}(x,t) \bigr\Vert \\ \leq & \frac{\lambda ^{m+1}}{1-\lambda }\sup_{ \substack{(x,t)\in \varOmega }} \bigl\vert u_{0}(x,t) \bigr\vert . \end{aligned}$$

Thus, the maximum absolute truncation error on Ω satisfies

$$\begin{aligned} \sup_{\substack{(x,t)\in \varOmega }} \Biggl\vert u(x,t)-\sum _{i=0}^{m} u_{i}(x,t) \Biggr\vert \leq \frac{\lambda ^{m+1}}{1-\lambda } \varXi , \end{aligned}$$

which ends the proof. □
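The bound (20) can be exercised numerically. The illustrative Python sketch below (domain and parameter values are our own choices) takes the terms \(u_{k}(t)=(-1)^{k}t^{k\alpha }/\varGamma (k\alpha +1)\) on \([0,0.5]\), where the contraction hypothesis of Theorem 4.1 holds, estimates λ as the largest ratio of successive suprema, and checks that the actual tail lies below the bound.

```python
import math

# Terms u_k(t) = (-1)^k t^(k*alpha)/Gamma(k*alpha+1) on t in [0, T];
# sup |u_k| over the domain is attained at t = T.
alpha, T = 0.8, 0.5

def sup_u(k):
    return T ** (k * alpha) / math.gamma(k * alpha + 1)

K = 40
lam = max(sup_u(k + 1) / sup_u(k) for k in range(K))
assert lam < 1.0          # contraction hypothesis of Theorem 4.1 holds here

# Bound (20): the tail after the first m+1 terms is at most
# lam^(m+1)/(1-lam) * Xi with Xi = sup |u_0| = 1.
m = 5
xi = sup_u(0)
tail = abs(sum((-1.0) ** k * sup_u(k) for k in range(m + 1, 200)))
bound = lam ** (m + 1) / (1.0 - lam) * xi
print(tail, bound)
```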

It is worth mentioning that, for the initial value problem, we can choose the initial guess as \(u_{0}(x,t)=\sum_{i=0}^{n-1}f_{i}(x)\frac{t^{i}}{i!}\). Moreover, when \(N[\phi (x,t;q)]\) is a polynomial in \(\phi (x,t;q)\) and its derivatives and the nonhomogeneous term is analytic at the initial point, then \(R_{m}\) can be written as \(\sum_{i=0}^{\infty }c_{i}(x)t^{ri}\) for some \(r\in \Re \) and \(0<\alpha \leq 1\).

Theorem 4.3

If \(R_{m-1}\) in Eq. (15) is of the form \(R_{m-1}=c_{0}(x)+\sum_{n=1}^{M}c_{n}(x)t^{rn}\) for a positive real number r, then Eq. (6) subject to the initial conditions Eq. (7) admits at least one solution.

Proof

Using the properties of the Sumudu transform, we have

$$ \eta ^{\alpha }S[R_{m-1}]=\eta ^{\alpha } \Biggl(S \bigl[c_{0}(x)\bigr]+\sum_{n=1}^{M}c_{n}(x)S \bigl[t^{rn}\bigr] \Biggr)=c_{0}(x)\eta ^{\alpha }+\sum _{n=1}^{M} c_{n}(x)\varGamma (rn+1)\eta ^{rn+\alpha }. $$

Since \(\alpha >0\), the inverse Sumudu transform of \(\eta ^{\alpha }S[R_{m-1}]\) exists and is given by

$$ S^{-1}\bigl[\eta ^{\alpha }S[R_{m-1}]\bigr]=\frac{c_{0}(x)}{\varGamma (\alpha +1)}t^{\alpha }+ \sum_{n=1}^{M} \frac{c_{n}(x)\varGamma (rn+1)}{\varGamma (rn+\alpha +1)}t^{rn+ \alpha }. $$

Hence, as \(M\rightarrow \infty \) the series \(u(x,t)=\lim_{M\rightarrow \infty }\sum_{n=0}^{M}u_{n}(x,t)\) becomes a solution of Eq. (6), and it satisfies the initial conditions by choosing \(u_{0}(x,t)=\sum_{i=0}^{n-1}f_{i}(x)\frac{t^{i}}{i!}\). This completes the proof. □
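The identity underlying this proof, namely that multiplication by \(\eta ^{\alpha }\) in the Sumudu domain realizes the Riemann–Liouville integral \(J^{\alpha }\), can be tested numerically. The illustrative Python sketch below (the sample coefficients and quadrature scheme are our own) compares the term-by-term closed form with direct quadrature for a two-term \(R=c_{0}+c_{1}t^{r}\).

```python
import math

def j_alpha(f, t, alpha, n=4000):
    """Riemann-Liouville integral of order alpha, which is what
    S^{-1}[eta^alpha S[f]] implements. Midpoint rule after the
    substitutions u = t-s and u = v**(1/alpha)."""
    h = t ** alpha / n
    total = 0.0
    for i in range(n):
        v = (i + 0.5) * h
        total += f(t - v ** (1.0 / alpha)) * h
    return total / (alpha * math.gamma(alpha))

# R = c0 + c1*t^r; the Sumudu calculus predicts the closed form
#   c0*t^alpha/Gamma(alpha+1) + c1*Gamma(r+1)/Gamma(r+alpha+1)*t^(r+alpha).
c0, c1, r, alpha, t = 2.0, 3.0, 1.5, 0.5, 1.0
closed = (c0 * t ** alpha / math.gamma(alpha + 1)
          + c1 * math.gamma(r + 1) / math.gamma(r + alpha + 1) * t ** (r + alpha))
numeric = j_alpha(lambda s: c0 + c1 * s ** r, t, alpha)
print(closed, numeric)
```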

5 Numerical examples

In this section we present several examples to show the feasibility and robustness of the proposed technique.

5.1 Example 1

Consider the linear system of fractional PDEs [6]

$$\begin{aligned}& D^{\alpha }_{t} u(x,t)-v_{x}(x,t)-u(x,t)+v(x,t)= -2, \end{aligned}$$
(21)
$$\begin{aligned}& D^{\beta }_{t} v(x,t)+u_{x}(x,t)-u(x,t)+v(x,t) = -2, \end{aligned}$$
(22)

where \(0<\alpha ,\beta \leq 1\), subject to the initial conditions

$$ u(x,0)=1+e^{x},\qquad v(x,0)=-1+e^{x}. $$

According to the solution procedure, we can choose \(u_{0}(x,t)=1+e^{x}\) and \(v_{0}(x,t)=e^{x}-1\). To determine \(R_{m-1}\), we substitute

$$ \phi _{u}(x,t;q)=\sum_{m=0}^{\infty }u_{m}(x,t)q^{m} \quad \text{and} \quad \phi _{v}(x,t;q)=\sum _{m=0}^{\infty }v_{m}(x,t)q^{m} $$

in Eq. (16) to give

$$\begin{aligned} Ru_{m-1} =& \frac{1}{(m-1)!} \frac{\partial ^{m-1} }{\partial q^{m-1}}\bigl[\bigl(\phi _{v}(x,t;q)\bigr)_{x}+ \phi _{u}(x,t;q)-\phi _{v}(x,t;q)-2\bigr]\big| _{q=0} \\ =&\frac{\partial v_{m-1}}{\partial x}+u_{m-1}-v_{m-1}-2(1-\chi _{m}) \end{aligned}$$
(23)

and

$$\begin{aligned} Rv_{m-1} =& \frac{1}{(m-1)!} \frac{\partial ^{m-1} }{\partial q^{m-1}}\bigl[-\bigl(\phi _{u}(x,t;q)\bigr)_{x}+ \phi _{u}(x,t;q)-\phi _{v}(x,t;q)-2\bigr]\big| _{q=0} \\ =&-\frac{\partial u_{m-1}}{\partial x}+u_{m-1}-v_{m-1}-2(1-\chi _{m}). \end{aligned}$$
(24)

Then the mth-order approximations are given by

$$\begin{aligned}& \begin{aligned}[b] u_{m} &= (\hbar +\chi _{m})u_{m-1} -\hbar S^{-1} \biggl[\eta ^{\alpha }S\biggl[ \frac{\partial v_{m-1}}{\partial x}+u_{m-1}-v_{m-1}-2(1- \chi _{m})\biggr] \biggr] \\ &\quad {}-\hbar (1-\chi _{m}) \bigl(1+e^{x}\bigr), \end{aligned} \end{aligned}$$
(25)
$$\begin{aligned}& \begin{aligned}[b] v_{m} &= (\hbar +\chi _{m})v_{m-1} -\hbar S^{-1} \biggl[\eta ^{\beta }S\biggl[- \frac{\partial u_{m-1}}{\partial x}+u_{m-1}-v_{m-1}-2(1-\chi _{m}) \biggr] \biggr] \\ &\quad{}- \hbar (1-\chi _{m}) \bigl(-1+e^{x}\bigr). \end{aligned} \end{aligned}$$
(26)

The first few terms of the series are

$$\begin{aligned}& u(x,t)=1+e^{x}- \frac{e^{x} \hbar (\hbar ^{2}+3 \hbar +3 ) t^{\alpha }}{\varGamma (\alpha +1)}+ \frac{e^{x} \hbar ^{2} (2 \hbar +3) t^{2 \alpha }}{\varGamma (2 \alpha +1)}- \frac{e^{x} \hbar ^{3} t^{3 \alpha }}{\varGamma (3 \alpha +1)}+\cdots , \\& v(x,t)=-1+e^{x}+ \frac{e^{x} \hbar (\hbar ^{2}+3 \hbar +3 ) t^{\beta }}{\varGamma (\beta +1)}+ \frac{e^{x} \hbar ^{2} (2 \hbar +3) t^{2 \beta }}{\varGamma (2 \beta +1)}+ \frac{e^{x} \hbar ^{3} t^{3 \beta }}{\varGamma (3 \beta +1)}+\cdots . \end{aligned}$$

To determine the region for which the solution is convergent, we plot the ħ-curve in Fig. 1. Clearly, the values of \(D_{t}^{0.99}u(0.9,0)\) and \(D_{t}^{0.99}v(0.9,0)\) do not change in the region \(-1.5\leq \hbar \leq -0.5\). For simplicity, we fix \(\hbar =-1\). Then the solution for Example 1 becomes

$$\begin{aligned}& u(x,t)=1+e^{x}+\frac{e^{x} t^{\alpha }}{\varGamma (\alpha +1)}+ \frac{e^{x} t^{2 \alpha }}{\varGamma (2 \alpha +1)}+ \frac{e^{x} t^{3 \alpha }}{\varGamma (3 \alpha +1)}+\cdots =1+e^{x}E_{ \alpha ,1}\bigl(t^{\alpha } \bigr), \\& v(x,t)=-1+e^{x}-\frac{e^{x} t^{\beta }}{\varGamma (\beta +1)}+ \frac{e^{x} t^{2 \beta }}{\varGamma (2 \beta +1)}- \frac{e^{x} t^{3 \beta }}{\varGamma (3 \beta +1)}+\cdots =-1+e^{x}E_{ \beta ,1} \bigl(-t^{\beta }\bigr), \end{aligned}$$

where \(E_{\gamma ,1}(z)=\sum_{k=0}^{\infty }\frac{z^{k}}{\varGamma (\gamma k+1)}\) is the Mittag-Leffler function; this coincides with the exact solution. We note that, for \(\hbar =-1\), the S-HAM solution reproduces the Laplace Adomian decomposition solution given in [6].
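The Mittag-Leffler series is straightforward to evaluate directly. The small illustrative Python sketch below (function names are our own) sums the series and checks the classical reduction \(E_{1,1}(z)=e^{z}\), which connects the fractional closed forms above to the integer-order case.

```python
import math

def mittag_leffler(alpha, z, terms=80):
    """Partial sum of E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1)."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

# For alpha = 1 the Mittag-Leffler function reduces to the exponential.
z = -0.7
print(mittag_leffler(1.0, z), math.exp(z))
```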

Figure 1

ħ-curve using 15 terms of approximation for Example 1

5.2 Example 2

Consider the fractional coupled Burgers equations [22]

$$\begin{aligned}& D^{\alpha }_{t} u = u_{xx}+2uu_{x}-(uv)_{x}, \end{aligned}$$
(27)
$$\begin{aligned}& D^{\beta }_{t} v = v_{xx}+2vv_{x}-(uv)_{x}, \end{aligned}$$
(28)

subject to the initial conditions \(u(x,0)=v(x,0)=\cos x\). According to the S-HAM algorithm, we can choose \(u_{0}=v_{0}=\cos x\). The mth-order terms are given by

$$\begin{aligned}& \begin{aligned}[b] u_{m}&= (\chi _{m} + \hbar )u_{m-1} - \hbar u_{0} (1 - \chi _{m}) - \hbar S^{-1} \Biggl(\eta ^{\alpha }S \Biggl((u_{m-1})_{xx} \\ &\quad{} +2\sum_{k=0}^{m-1}u_{k}(u_{m-1-k})_{x}-\sum_{k=0}^{m-1}(u_{k})_{x}v_{m-1-k}- \sum_{k=0}^{m-1}u_{k}(v_{m-1-k})_{x} \Biggr) \Biggr), \end{aligned} \end{aligned}$$
(29)
$$\begin{aligned}& \begin{aligned}[b] v_{m}&= (\chi _{m} + \hbar )v_{m-1} - \hbar v_{0} (1 - \chi _{m}) - \hbar S^{-1} \Biggl(\eta ^{\beta }S \Biggl((v_{m-1})_{xx} \\ &\quad{} +2\sum_{k=0}^{m-1}v_{k}(v_{m-1-k})_{x}-\sum_{k=0}^{m-1}(u_{k})_{x}v_{m-1-k}- \sum_{k=0}^{m-1}u_{k}(v_{m-1-k})_{x} \Biggr) \Biggr). \end{aligned} \end{aligned}$$
(30)

The first few terms of the series solutions are

$$\begin{aligned}& u=\cos (x) \biggl(1+ \frac{\hbar (\hbar +2) t^{\alpha }}{\varGamma (\alpha +1)}+ \frac{\hbar ^{2} t^{2 \alpha } (2 \sin (x)+1)}{\varGamma (2 \alpha +1)}- \frac{2 \hbar ^{2} \sin (x) t^{\alpha +\beta }}{\varGamma (\alpha +\beta +1)}+ \cdots \biggr), \\& v= \cos (x) \biggl(1+ \frac{\hbar (\hbar +2) t^{\beta }}{\varGamma (\beta +1)}+ \frac{\hbar ^{2} t^{2 \beta } (2 \sin (x)+1)}{\varGamma (2 \beta +1)}- \frac{2 \hbar ^{2} \sin (x) t^{\alpha +\beta }}{\varGamma (\alpha +\beta +1)}+ \cdots \biggr). \end{aligned}$$

With \(\alpha =\beta \) and \(\hbar =-1\), the solutions become

$$\begin{aligned}& u=\cos (x) \biggl(1-\frac{t^{\alpha }}{\varGamma (\alpha +1)}+ \frac{t^{2 \alpha }}{\varGamma (2 \alpha +1)}+\cdots \biggr)=\cos (x)E_{\alpha ,1}\bigl(-t^{\alpha }\bigr), \\& v=\cos (x) \biggl(1-\frac{t^{\alpha }}{\varGamma (\alpha +1)}+ \frac{t^{2 \alpha }}{\varGamma (2 \alpha +1)}+\cdots \biggr)=\cos (x)E_{\alpha ,1}\bigl(-t^{\alpha }\bigr). \end{aligned}$$

For \(\alpha \neq \beta \), we present the solution for \(\alpha =0.9\), \(\beta =0.8\) and \(\hbar =-0.2\) in Fig. 2 and its residual error in Fig. 3. We note that the exact solution of the fractional coupled Burgers equations for \(\alpha =\beta \) is obtained via the S-HAM, whereas the fractional variational iteration method (FVIM) provides only an approximate one; see [22]. Moreover, the S-HAM solution is discussed for \(t\in [0,1]\), whereas the FVIM solution is addressed only on the much smaller interval \(t\in [0,0.005]\). Figure 4 represents the S-HAM solution for \(\alpha =0.5\) and \(\beta =0.25\) on \(t\in [0,1]\) with \(\hbar =-0.324\).
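For the integer-order case \(\alpha =\beta =1\) the closed form above is \(u=v=\cos (x)e^{-t}\), which can be checked against Eqs. (27)–(28) by finite differences. The Python sketch below is illustrative only; the sample point and step size are our own choices.

```python
import math

def u(x, t):
    # Closed form for alpha = beta = 1: u = v = cos(x) * exp(-t)
    return math.cos(x) * math.exp(-t)

def residual(x, t, h=1e-3):
    """Finite-difference residual of D_t u - u_xx - 2 u u_x + (u v)_x
    with v = u, using central differences of step h."""
    ut = (u(x, t + h) - u(x, t - h)) / (2 * h)
    ux = (u(x + h, t) - u(x - h, t)) / (2 * h)
    uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    uvx = 2 * u(x, t) * ux          # (u*v)_x = (u^2)_x = 2 u u_x when v = u
    return ut - uxx - 2 * u(x, t) * ux + uvx

print(residual(0.3, 0.5))   # should be near zero
```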

Figure 2

Solution Example 2 using \(\alpha =0.9\) and \(\beta =0.8\). (a) for \(u(x,t)\) and (b) for \(v(x,t)\)

Figure 3

Residual error for Example 2 using the 20th-order approximation with \(\alpha =0.9\) and \(\beta =0.8\). (a) residual error of \(u(x,t)\) and (b) residual error of \(v(x,t)\)

Figure 4

(a) \(u(x,t)\) and (b) \(v(x,t)\) of Example 2 using \(\alpha =0.5\) and \(\beta =0.25\)

5.3 Example 3

Consider the following nonlinear FPDE:

$$\begin{aligned}& D^{\alpha }_{t} u(x,t)-u_{x}(x,t)v(x,t)-u(x,t)= -1, \end{aligned}$$
(31)
$$\begin{aligned}& D^{\beta }_{t} v(x,t)+v_{x}(x,t)u(x,t)+v(x,t) = -1, \end{aligned}$$
(32)

where \(0<\alpha \leq 1\) and \(0<\beta \leq 1\), subject to the initial conditions \(u(x,0)=e^{x}\), \(v(x,0)=e^{-x}\). According to the solution procedure, we can choose \(u_{0}(x,t)=e^{x}\) and \(v_{0}(x,t)=e^{-x}\). The mth-order terms are given by

$$\begin{aligned}& \begin{aligned}[b] u_{m} &= (\hbar +\chi _{m})u_{m-1} -\hbar (1-\chi _{m}) \bigl(e^{x}\bigr) \\ &\quad{}- \hbar S^{-1} \Biggl[\eta ^{\alpha }S\Biggl[\sum _{j=0}^{m-1} \frac{\partial u_{j}}{\partial x}v_{m-1-j}+u_{m-1}-(1- \chi _{m})\Biggr] \Biggr], \end{aligned} \end{aligned}$$
(33)
$$\begin{aligned}& \begin{aligned}[b] v_{m} &= (\hbar +\chi _{m})v_{m-1}- \hbar (1-\chi _{m}) \bigl(e^{-x}\bigr) \\ &\quad{}- \hbar S^{-1} \Biggl[\eta ^{\beta }S\Biggl[-\sum _{j=0}^{m-1} \frac{\partial v_{j}}{\partial x}u_{m-1-j}-v_{m-1}-(1- \chi _{m})\Biggr] \Biggr]. \end{aligned} \end{aligned}$$
(34)

Thus, the solution becomes

$$\begin{aligned}& u(x,t)=- \frac{\hbar ^{2} t^{\alpha +\beta }}{\varGamma (\alpha +\beta +1)}+ \frac{\hbar ^{2} t^{2 \alpha }}{\varGamma (2 \alpha +1)}+ \frac{e^{x} \hbar ^{2} t^{2 \alpha }}{\varGamma (2 \alpha +1)}- \frac{e^{x} \hbar ^{2} t^{\alpha }}{\varGamma (\alpha +1)}- \frac{2 e^{x} \hbar t^{\alpha }}{\varGamma (\alpha +1)}+e^{x}+\cdots , \\& v(x,t)= \frac{\hbar ^{2} t^{\alpha +\beta }}{\varGamma (\alpha +\beta +1)}- \frac{\hbar ^{2} t^{2 \beta }}{\varGamma (2 \beta +1)}+ \frac{e^{-x} \hbar ^{2} t^{2 \beta }}{\varGamma (2 \beta +1)}+ \frac{e^{-x} \hbar ^{2} t^{\beta }}{\varGamma (\beta +1)}+ \frac{2 e^{-x} \hbar t^{\beta }}{\varGamma (\beta +1)}+e^{-x}+\cdots . \end{aligned}$$

To determine the region for which the solution converges, we plot the ħ-curve in Fig. 5. It is clear that the values of \(D_{t}^{0.99}u(0.9,0)\) and \(D_{t}^{0.98}v(0.9,0)\) do not change in the region \(-1.5\leq \hbar \leq -0.5\). For simplicity, we fix \(\hbar =-1\). When \(\alpha =\beta =1\) the solution becomes

$$\begin{aligned}& u(x,t)=e^{x}+t e^{x}+\frac{t^{2} e^{x}}{2}+ \frac{t^{3} e^{x}}{6}+ \cdots =e^{x+t}, \end{aligned}$$
(35)
$$\begin{aligned}& v(x,t)=e^{-x}-t e^{-x}+\frac{t^{2} e^{-x}}{2}- \frac{t^{3} e^{-x}}{6}+\cdots =e^{-x-t} . \end{aligned}$$
(36)

The solution for Example 3 is presented in Fig. 6 and the residual error is given in Fig. 7. Clearly, the present method can solve this kind of system of fractional partial differential equations with an accuracy within \(10^{-7}\). Finally, the solution for \(\alpha =0.7\) and \(\beta =0.5\) is plotted in Fig. 8. Tables 1–3 present the solutions and their residual errors for several values of α and β along x and \(t \in [0,1]\) with a proper selection of ħ. From these tables, we observe that the method is effective for this kind of problem. In contrast to the published research [23], the present work considers this problem both when \(\alpha =\beta \) and when \(\alpha \neq \beta \).
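As a quick check of the closed forms (35)–(36), their partial sums can be evaluated numerically against the exponentials they represent (illustrative Python; the sample point and truncation order are our own choices).

```python
import math

# Partial sums of (35)-(36) for alpha = beta = 1:
# u = e^x * sum t^k/k!  and  v = e^{-x} * sum (-t)^k/k!
x, t, M = 0.4, 0.9, 30
u_approx = math.exp(x) * sum(t ** k / math.factorial(k) for k in range(M))
v_approx = math.exp(-x) * sum((-t) ** k / math.factorial(k) for k in range(M))
print(u_approx, math.exp(x + t))
print(v_approx, math.exp(-x - t))
```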

Figure 5

ħ-curve using 20 terms of approximation for Example 3

Figure 6

Solution of Example 3 using \(\alpha =0.99\) and \(\beta =0.98\), (a) for \(u(x,t)\) and (b) for \(v(x,t)\)

Figure 7

Residual error for (a) \(u(x,t)\) and (b) \(v(x,t)\) using 20th order of approximation with \(\alpha =0.99\) and \(\beta =0.98\) for Example 3

Figure 8

Solution of Example 3 for \(\alpha =0.7\), \(\beta =0.5\), (a) for \(u(x,t)\) and (b) for \(v(x,t)\)

Table 1 Solution and residual error for Example 3 when \(\alpha =0.7\) and \(\beta =0.8\) at several values of x and t using 60 terms of the series (\(\hbar =-1\))
Table 2 Solution and residual error for Example 3 when \(\alpha =0.7\) and \(\beta =0.5\) at several values of x and t using 60 terms of the series (\(\hbar =-0.85\))
Table 3 Solution and residual error for Example 3 when \(\alpha =0.9\) and \(\beta =0.3\) at several values of x and t using 60 terms of the series (\(\hbar =-0.55\))

6 Conclusion

Our concern was to provide approximate series solutions to systems of fractional partial differential equations using a relatively new analytical technique, the homotopy analysis Sumudu transform method (S-HAM). A sufficient condition for convergence was presented and, based on it, an estimate of the maximum absolute truncation error was obtained. Several examples were presented to demonstrate the efficiency of the method. Moreover, the calculations involved in the method are simple and straightforward.