1 Introduction

In this paper, we consider the Riesz tempered fractional diffusion equation with a nonlinear source term

$$ \frac{\partial u(x,t)}{\partial t}={\kappa} \frac{\partial^{\alpha ,\lambda} u(x,t)}{\partial \vert x \vert ^{\alpha}}+g \bigl(x,t,u(x,t) \bigr), \quad (x,t) \in(a,b)\times(0,T], $$
(1.1)

with the initial and boundary conditions

$$\begin{aligned}& u(x,0)=\varphi(x), \quad x\in[a,b] , \end{aligned}$$
(1.2)
$$\begin{aligned}& u(a,t)=0,\qquad u(b,t)=0,\quad t \in [0,T], \end{aligned}$$
(1.3)

where \(1<\alpha< 2\), \(\lambda\geq0\), the diffusion coefficient κ is a positive constant, \(\varphi(x)\) is a known, sufficiently smooth function, and \(g(x,t,u)\) satisfies the Lipschitz condition

$$ \bigl\vert g(x,t,u)-g(x,t,\upsilon) \bigr\vert \leq L \vert u-\upsilon \vert ,\quad \forall u,\upsilon\in \mathbb{R},$$
(1.4)

where L is the Lipschitz constant, and the Riesz tempered fractional derivative \(\frac{\partial^{\alpha,\lambda}u(x,t)}{ \partial \vert x \vert ^{\alpha} }\) is expressed as [1, 2]

$$ \frac{\partial^{\alpha,\lambda}u(x,t)}{ \partial \vert x \vert ^{\alpha} }=c_{\alpha} \bigl( {}_{a}^{R} D^{\alpha,\lambda }_{x}+{}_{x}^{R} D^{\alpha,\lambda}_{b} \bigr)u(x,t),$$
(1.5)

where \(c_{\alpha}=-\frac{1}{2\cos(\frac{\pi\alpha}{2})}\), \({}_{a}^{R} D^{\alpha,\lambda}_{x}\) and \({}_{x}^{R} D^{\alpha ,\lambda}_{b}\) stand for the left and right Riemann–Liouville tempered fractional derivatives which are defined as

$$\begin{aligned}& {}_{a}^{R} D^{\alpha,\lambda}_{x}u(x,t)={}_{a}^{R} D^{(\alpha ,\lambda)}_{x}u(x,t)-\lambda^{\alpha}u(x,t)-\alpha \lambda^{\alpha -1}\frac{\partial u(x,t)}{\partial x}, \end{aligned}$$
(1.6)
$$\begin{aligned}& {}_{x}^{R} D^{\alpha,\lambda}_{b}u(x,t)={}_{x}^{R} D^{(\alpha ,\lambda)}_{b}u(x,t)-\lambda^{\alpha}u(x,t)+\alpha \lambda^{\alpha -1}\frac{\partial u(x,t)}{\partial x}, \end{aligned}$$
(1.7)

where the symbols \({}_{a}^{R} D^{(\alpha,\lambda)}_{x}\) and \({}_{x}^{R} D^{(\alpha,\lambda)}_{b}\) are defined by

$$\begin{aligned}& {}_{a}^{R} D^{(\alpha,\lambda)}_{x}u(x,t)=e^{-\lambda x}{}_{a}^{R} D^{\alpha}_{x}e^{\lambda x}u(x,t)=\frac{e^{-\lambda x}}{\varGamma (2-\alpha)} \frac{\partial^{2}}{\partial x^{2}} \int_{a}^{x}e^{\lambda \xi}u(\xi,t) (x- \xi)^{1-\alpha}\,d\xi, \\& {}_{x}^{R} D^{(\alpha,\lambda)}_{b}u(x,t)=e^{\lambda x}{}_{x}^{R} D^{\alpha}_{b}e^{-\lambda x}u(x,t)=\frac{e^{\lambda x}}{\varGamma (2-\alpha)} \frac{\partial^{2}}{\partial x^{2}} \int _{x}^{b}e^{-\lambda\xi}u(\xi,t) ( \xi-x)^{1-\alpha}\,d\xi, \end{aligned}$$

where \({\varGamma}(\cdot)\) is the Gamma function.

Moreover, if \(\lambda=0\), then the Riesz tempered fractional derivative reduces to the usual Riesz fractional derivative (see e.g. [3,4,5,6,7,8]).

In recent years, differential equations with tempered fractional derivatives have been widely used to model special phenomena in fields such as geophysics [9,10,11] and finance [12, 13]. Constructing numerical algorithms for tempered fractional partial differential equations has therefore attracted many authors’ attention (see e.g. [1, 2, 14,15,16,17,18,19,20,21,22,23,24]). Li and Deng [24] proposed the tempered weighted and shifted Grünwald–Letnikov formula with second-order accuracy for the Riemann–Liouville tempered fractional derivative, and this approximation was applied by Zhang et al. [14] to the numerical simulation of the tempered fractional Black–Scholes equation for European double barrier options. Based on the same approximation, Qu and Liang [15] constructed a Crank–Nicolson scheme for a class of variable-coefficient tempered fractional diffusion equations and discussed its stability and convergence. Yu et al. [16] proposed a third-order difference scheme for the one-sided Riemann–Liouville tempered fractional diffusion equation and gave a stability and convergence analysis. Yu et al. [19] constructed a fourth-order quasi-compact difference operator for the Riemann–Liouville tempered fractional derivative and tested its effectiveness by numerical experiments. Zhang et al. [1] presented a modified second-order Lubich tempered difference operator for approximating the Riemann–Liouville tempered fractional derivative and verified its effectiveness by theoretical analysis and numerical results. The aim of this paper is to combine the implicit midpoint method with the modified second-order Lubich tempered difference operator to construct a new numerical scheme, and to give a theoretical analysis of this method.

The outline of this paper is as follows. In Sect. 2, a numerical scheme is proposed for solving the Riesz tempered fractional diffusion equation with a nonlinear source term. Section 3 is devoted to the stability and convergence analysis. In Sect. 4, we use the proposed method (abbr. T-ML2) and the method (abbr. T-WSGL) from [24] to solve test problems. Finally, we draw conclusions in Sect. 5.

2 Numerical method

Let \(x_{i}=a+ih\), \(i=0,1,2,\ldots,M\), and \(t_{n}=n\tau\), \(n=0,1,2,\ldots,N\), where \(h=(b-a)/M\) is the spatial step size and \(\tau=T/N\) is the time step size. The exact solution and numerical solution at the point \((x_{i},t_{n})\) are denoted by \(u(x_{i},t_{n})\) and \(u^{n}_{i}\), respectively.

To discretize the Riemann–Liouville tempered fractional derivatives, we introduce the modified second-order Lubich tempered difference operators \(\delta^{\alpha}_{x-}\) and \(\delta^{\alpha}_{x+}\) at the point \((x_{i},t_{n})\), which are defined as

$$\begin{aligned}& \begin{aligned} \delta^{\alpha}_{x-}u(x_{i},t_{n})&= \frac{1}{h^{\alpha}}\sum^{i+1}_{k=0}g^{(\alpha,\lambda)}_{k} u(x_{i-k+1},t_{n})-\frac {e^{h\lambda}}{h^{\alpha}} \biggl( \frac{3\alpha-2}{2\alpha}-\frac {2(\alpha-1)}{\alpha}e^{-h\lambda} \\ &\quad{} +\frac{\alpha-2}{2\alpha}e^{-2h\lambda} \biggr)^{\alpha }u(x_{i},t_{n}), \end{aligned} \\& \begin{aligned} \delta^{\alpha}_{x+}u(x_{i},t_{n})&= \frac{1}{h^{\alpha}}\sum^{M-i+1}_{k=0}g^{(\alpha,\lambda)}_{k} u(x_{i+k-1},t_{n})-\frac {e^{h\lambda}}{h^{\alpha}} \biggl( \frac{3\alpha-2}{2\alpha}-\frac {2(\alpha-1)}{\alpha}e^{-h\lambda} \\ &\quad{} +\frac{\alpha-2}{2\alpha}e^{-2h\lambda} \biggr)^{\alpha }u(x_{i},t_{n}), \end{aligned} \end{aligned}$$

where \(g^{(\alpha,\lambda)}_{k}\) are given as

$$ g^{(\alpha,\lambda)}_{k} = \textstyle\begin{cases} e^{h\lambda} ( \frac{3\alpha-2}{2\alpha} )^{\alpha} , & k=0,\\ \frac{4\alpha(1-\alpha)}{3\alpha-2} e^{-h\lambda}g^{(\alpha ,\lambda)}_{0} , & k=1,\\ \frac{1}{(3\alpha-2)k} \{ 4(1-\alpha)(\alpha-k+1)e^{-h\lambda }g^{(\alpha,\lambda)}_{k-1}+(\alpha-2)(2\alpha\\ \quad{} -k+2)e^{-2h\lambda}g^{(\alpha,\lambda)}_{k-2} \}, & k\geq2, \end{cases} $$
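The recursion for \(g^{(\alpha,\lambda)}_{k}\) is straightforward to implement. The following Python sketch (the function name is our own) computes the weights directly from the cases above; as a sanity check, for \(\alpha=2\) and \(\lambda=0\) the recursion reproduces the classical second-difference stencil \(1,-2,1\):

```python
import math

def tempered_lubich_weights(alpha, lam, h, K):
    """Weights g_k^{(alpha,lambda)}, k = 0..K, of the modified second-order
    Lubich tempered difference operator, computed by the recursion above.
    (A sketch for checking the recursion; the function name is our own.)"""
    e = math.exp(-h * lam)
    g = [0.0] * (K + 1)
    g[0] = math.exp(h * lam) * ((3 * alpha - 2) / (2 * alpha)) ** alpha
    if K >= 1:
        g[1] = 4 * alpha * (1 - alpha) / (3 * alpha - 2) * e * g[0]
    for k in range(2, K + 1):
        # three-term recursion for k >= 2
        g[k] = (4 * (1 - alpha) * (alpha - k + 1) * e * g[k - 1]
                + (alpha - 2) * (2 * alpha - k + 2) * e * e * g[k - 2]) \
               / ((3 * alpha - 2) * k)
    return g
```

In practice all M weights are precomputed once and reused at every grid point and time level.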

We then have the following lemma.

Lemma 2.1

([1]) If \(\widetilde{u}(x,t_{n})\in\mathscr{C} ^{2+\alpha}_{\lambda}(\mathbb {R})\) (\(1\leq n\leq N\)), then, for a fixed step size h, we have

$$ \begin{gathered} {}_{a}^{R} D^{(\alpha,\lambda)}_{x}u(x_{i},t_{n})- \lambda^{\alpha}u(x_{i},t_{n})=\delta^{\alpha}_{x-} u(x_{i},t_{n})+O \bigl(h^{2} \bigr),\quad 1\leq i \leq M-1,0\leq n \leq N, \\ {}_{x}^{R} D^{(\alpha,\lambda)}_{b}u(x_{i},t_{n})- \lambda^{\alpha}u(x_{i},t_{n})=\delta^{\alpha}_{x+} u(x_{i},t_{n})+O \bigl(h^{2} \bigr),\quad 1\leq i \leq M-1,0\leq n \leq N, \end{gathered} $$

where \(\widetilde{u}(x,t_{n})\) is a zero-extension of \(u(x,t_{n})\) with respect to x on \(\mathbb {R}\) which is defined as

$$ \widetilde{u}(x,t_{n})= \textstyle\begin{cases} u(x,t_{n}), & x\in[a,b],\\ 0, & x\in\mathbb{R} \backslash[a,b]. \end{cases} $$

The fractional Sobolev space \(\mathscr{C}^{2+\alpha}_{\lambda }(\mathbb {R})\) is defined by

$$ \mathscr{C}^{2+\alpha}_{\lambda}(\mathbb{R})= \biggl\{ v\Big| v\in L_{1}(\mathbb{R}) , \int^{+\infty}_{-\infty} \bigl(\lambda^{2}+\varpi ^{2} \bigr)^{\frac{2+\alpha}{2}} \bigl\vert \widehat{v}(\varpi) \bigr\vert \,d\varpi< \infty \biggr\} , $$

where \(\widehat{v}(\varpi)\) is represented as the Fourier transformation of \(v(x)\) defined by

$$ \widehat{v}(\varpi)= \int^{+\infty}_{-\infty} e^{-i\varpi x}v(x)\,dx,\quad i^{2}=-1. $$

According to (1.5)–(1.7), we have

$$ \kappa\frac{\partial^{\alpha,\lambda} u(x_{i},t_{n})}{\partial \vert x \vert ^{\alpha}}=\delta^{\alpha}_{x} u(x_{i},t_{n})+O \bigl(h^{2} \bigr),$$
(2.1)

where

$$ \delta^{\alpha}_{x} = \kappa c_{\alpha}\bigl( \delta^{\alpha}_{x-} + \delta^{\alpha}_{x+} \bigr). $$
(2.2)

Using the implicit midpoint method to solve (1.1) at the point \((x_{i},t_{n})\), we find

$$ \begin{aligned}[b] u(x_{i},t_{n+1})&=u(x_{i},t_{n})+ \tau\kappa\frac{\partial^{\alpha ,\lambda} }{\partial \vert x \vert ^{\alpha}} \biggl( \frac {u(x_{i},t_{n+1})+u(x_{i},t_{n})}{2} \biggr) \\ &\quad {}+\tau g \biggl(x_{i},t_{n+\frac{1}{2}},\frac {u(x_{i},t_{n+1})+u(x_{i},t_{n})}{2} \biggr) +O \bigl(\tau^{3} \bigr), \\ &\quad 1\leq i \leq M-1,0\leq n \leq N-1.\end{aligned} $$
(2.3)
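To make the time discretization concrete, here is a minimal sketch of one implicit midpoint step for a scalar ODE \(u'=f(t,u)\) (the function name, tolerance and iteration cap are our own choices); the implicit stage is resolved by fixed-point iteration, which converges when \(\tau L/2<1\) for an L-Lipschitz f:

```python
def implicit_midpoint_step(f, t, u, tau, tol=1e-12, max_iter=100):
    """One step of the implicit midpoint rule
        u_{n+1} = u_n + tau * f(t_{n+1/2}, (u_n + u_{n+1}) / 2),
    with the implicit relation solved by fixed-point iteration."""
    u_new = u  # initial guess for u_{n+1}
    for _ in range(max_iter):
        u_next = u + tau * f(t + tau / 2, (u + u_new) / 2)
        if abs(u_next - u_new) < tol:
            return u_next
        u_new = u_next
    return u_new
```

For example, integrating \(u'=-u\) on \([0,1]\) with \(\tau=0.1\) reproduces \(e^{-1}\) up to an \(O(\tau^{2})\) error.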

Applying (2.1) to discretize the Riesz tempered fractional derivative, we get

$$ \begin{aligned}[b] u(x_{i},t_{n+1})&=u(x_{i},t_{n})+ \tau\delta^{\alpha}_{x} \biggl( \frac {u(x_{i},t_{n+1})+u(x_{i},t_{n})}{2} \biggr) \\ &\quad {}+\tau g \biggl(x_{i},t_{n+\frac{1}{2}},\frac{u(x_{i},t_{n+1})+u(x_{i},t_{n})}{2} \biggr)+\tau\mathscr{R}_{i}^{n+\frac{1}{2}}, \\ &\quad 1\leq i \leq M-1,0 \leq n \leq N-1,\end{aligned} $$
(2.4)

where there exists a constant \(c_{1}\) such that

$$ \bigl\vert \mathscr{R}^{n+\frac{1}{2}}_{i} \bigr\vert \leq c_{1} \bigl( \tau^{2}+h^{2} \bigr), \quad1\leq i \leq M-1,0\leq n \leq N-1.$$
(2.5)

Omitting the error term \(\mathscr{R}^{n+\frac{1}{2}}_{i}\), we obtain the following numerical scheme for solving (1.1)–(1.3):

$$\begin{aligned}& u_{i}^{n+1}=u_{i}^{n}+ \tau\delta^{\alpha}_{x} u^{n+\frac{1}{2}}_{i}+\tau g^{n+\frac{1}{2}}_{i},\quad 1\leq i \leq M-1,0\leq n \leq N-1, \end{aligned}$$
(2.6)
$$\begin{aligned}& u_{i}^{0}=\varphi(x_{i}), \quad0\leq i\leq M, \end{aligned}$$
(2.7)
$$\begin{aligned}& u_{0}^{n}=0,\qquad u_{M}^{n}=0,\quad0\leq n\leq N, \end{aligned}$$
(2.8)

where \(u_{i}^{n+\frac{1}{2}}=\frac{u^{n+1}_{i}+u^{n}_{i}}{2}\), \(g^{n+\frac{1}{2}}_{i}=g( x_{i},t_{n+\frac{1}{2}},u_{i}^{n+\frac{1}{2}} )\).

Furthermore, the matrix form of (2.6) can be written as

$$ ( I-A )U^{n+1}= ( I+A )U^{n}+\tau g^{n+\frac{1}{2}},\quad 0\leq n \leq N-1,$$
(2.9)

where

$$ U^{n}= \bigl( u^{n}_{1},u^{n}_{2}, \ldots,u^{n}_{M-1} \bigr)^{T},\qquad g^{n+\frac{1}{2}}= \bigl( g^{n+\frac{1}{2}}_{1},g^{n+\frac{1}{2}}_{2}, \ldots ,g^{n+\frac{1}{2}}_{M-1} \bigr)^{T}, $$

in which A is a Toeplitz matrix that can be written as \(A=B+B^{T}\), where the matrix B is defined as

$$ B=\frac{\kappa c_{\alpha}\tau}{2h^{\alpha}} \begin{bmatrix} g^{(\alpha,\lambda)}_{1}+d & g^{(\alpha,\lambda)}_{0} \\ g^{(\alpha,\lambda)}_{2} & g^{(\alpha,\lambda)}_{1}+d & g^{(\alpha ,\lambda)}_{0}\\ \vdots&\vdots&\ddots&\ddots\\ \vdots&\vdots&\vdots&\ddots&\ddots\\ g^{(\alpha,\lambda)}_{M-2} & g^{(\alpha,\lambda)}_{M-3} & g^{(\alpha,\lambda)}_{M-4} & \cdots& g^{(\alpha,\lambda)}_{1}+d & g^{(\alpha,\lambda)}_{0}\\ g^{(\alpha,\lambda)}_{M-1} & g^{(\alpha,\lambda)}_{M-2} & g^{(\alpha,\lambda)}_{M-3} & \cdots& g^{(\alpha,\lambda)}_{2} & g^{(\alpha,\lambda)}_{1}+d \end{bmatrix} , $$

where \(d=-e^{h\lambda} ( \frac{3\alpha-2}{2\alpha}-\frac {2(\alpha-1)}{\alpha}e^{-h\lambda}+\frac{\alpha-2}{2\alpha }e^{-2h\lambda} )^{\alpha}\).
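As an illustration, the matrix A can be assembled directly from the definitions of \(g^{(\alpha,\lambda)}_{k}\), d and \(c_{\alpha}\); the NumPy sketch below (the function name is our own) does this with a plain double loop. For \(\alpha=2\), \(\lambda=0\) it reduces to the classical tridiagonal second-difference matrix, and for \(1<\alpha<2\) one can check numerically that \(\chi A \chi^{T}\leq0\) (cf. Remark 3.1 below):

```python
import numpy as np

def assemble_A(alpha, lam, kappa, tau, h, M):
    """Assemble A = B + B^T for the scheme (2.9).
    Self-contained sketch: recomputes the Lubich weights g_k and the
    shift d defined above (function and variable names are our own)."""
    e = np.exp(-h * lam)
    g = np.zeros(M)
    g[0] = np.exp(h * lam) * ((3 * alpha - 2) / (2 * alpha)) ** alpha
    g[1] = 4 * alpha * (1 - alpha) / (3 * alpha - 2) * e * g[0]
    for k in range(2, M):
        g[k] = (4 * (1 - alpha) * (alpha - k + 1) * e * g[k - 1]
                + (alpha - 2) * (2 * alpha - k + 2) * e**2 * g[k - 2]) \
               / ((3 * alpha - 2) * k)
    d = -np.exp(h * lam) * ((3 * alpha - 2) / (2 * alpha)
                            - 2 * (alpha - 1) / alpha * e
                            + (alpha - 2) / (2 * alpha) * e**2) ** alpha
    c_alpha = -1.0 / (2 * np.cos(np.pi * alpha / 2))
    # B is Toeplitz: diagonal g_1 + d, superdiagonal g_0, subdiagonals g_2, g_3, ...
    B = np.zeros((M - 1, M - 1))
    for i in range(M - 1):
        for j in range(M - 1):
            if j == i + 1:
                B[i, j] = g[0]
            elif j <= i:
                B[i, j] = g[i - j + 1] + (d if j == i else 0.0)
    B *= kappa * c_alpha * tau / (2 * h**alpha)
    return B + B.T
```

Each time step of (2.9) then amounts to solving a linear system with matrix I−A, combined with a fixed-point iteration on \(g^{n+\frac{1}{2}}\) when g is nonlinear.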

Remark 2.1

In [24], Li and Deng combined the Crank–Nicolson method with the tempered weighted and shifted Grünwald–Letnikov operators to propose a numerical method with accuracy \(O(\tau^{2}+h^{2})\) for the tempered fractional diffusion equation with a linear source term, where the tempered weighted and shifted Grünwald–Letnikov operators with second-order accuracy are defined as

$$\begin{aligned}& \widehat{\delta}^{\alpha}_{x-}u(x_{i},t_{n})= \frac{1}{h^{\alpha}}\sum^{i+1}_{k=0} \widehat{g}^{(\alpha,\lambda)}_{k} u(x_{i-k+1},t_{n})- \frac{1}{h^{\alpha}} \bigl( \gamma_{1}e^{h\lambda }+ \gamma_{2}+\gamma_{3}e^{-h\lambda} \bigr) \bigl(1-e^{-h\lambda } \bigr)^{\alpha}u(x_{i},t_{n}), \\& \widehat{\delta}^{\alpha}_{x+}u(x_{i},t_{n})= \frac{1}{h^{\alpha}}\sum^{M-i+1}_{k=0} \widehat{g}^{(\alpha,\lambda)}_{k} u(x_{i+k-1},t_{n})- \frac{1}{h^{\alpha}} \bigl( \gamma_{1}e^{h\lambda }+ \gamma_{2}+\gamma_{3}e^{-h\lambda} \bigr) \bigl(1-e^{-h\lambda } \bigr)^{\alpha}u(x_{i},t_{n}), \end{aligned}$$

where the weights \(\widehat{g}^{(\alpha,\lambda)}_{k}\) are given as

$$ \widehat{g}^{(\alpha,\lambda)}_{k}= \textstyle\begin{cases} \gamma_{1} w_{0}^{(\alpha)}e^{h\lambda},\quad& k=0,\\ \gamma_{1} w_{1}^{(\alpha)}+\gamma_{2} w_{0}^{(\alpha)},\quad& k=1,\\ ( \gamma_{1} w_{k}^{(\alpha)}+\gamma_{2} w_{k-1}^{(\alpha)}+\gamma _{3} w_{k-2}^{(\alpha)} )e^{-(k-1)h\lambda}, & k\geq2, \end{cases} $$

the weights \(w_{0}^{(\alpha)}=1\), \(w_{k}^{(\alpha)}=(1-\frac{1+\alpha }{k})w_{k-1}^{(\alpha)}\), \(k\geq1\). The values of \(\gamma_{1}\), \(\gamma _{2}\) and \(\gamma_{3}\) can be selected from one of the following three sets:

  1. \(S^{\alpha}_{1}( \gamma_{1},\gamma_{2},\gamma_{3} )= \{ \max \{ \frac{2(\alpha^{2}+3\alpha-4)}{\alpha^{2}+3\alpha+2},\frac {\alpha^{2}+3\alpha}{\alpha^{2}+3\alpha+4} \}\leq\gamma _{1}\leq\frac{3(\alpha^{2}+3\alpha-2)}{2(\alpha^{2}+3\alpha +2)}, \gamma_{2}=\frac{2+\alpha}{2}-2\gamma_{1}, \gamma_{3}=\gamma _{1}-\frac{\alpha}{2} \}\).

  2. \(S^{\alpha}_{2}( \gamma_{1},\gamma_{2},\gamma_{3} )= \{ \gamma _{1}=\frac{2+\alpha}{4}-\frac{\gamma_{2}}{2}, \frac{(\alpha -4)(\alpha^{2}+3\alpha+2)}{2(\alpha^{2}+3\alpha+2)}\leq\gamma _{2}\leq\min \{ \frac{(\alpha-2)(\alpha^{2}+3\alpha+4)+16}{2(\alpha^{2}+3\alpha +4)}, \frac{(\alpha-6)(\alpha^{2}+3\alpha+2)+48}{2(\alpha^{2}+3\alpha +2)} \}, \gamma_{3}=\frac{2-\alpha}{4}-\frac{\gamma_{2}}{2} \}\).

  3. \(S^{\alpha}_{3}( \gamma_{1},\gamma_{2},\gamma_{3} )= \{ \gamma _{1}=\frac{\alpha}{2}+\gamma_{3}, \gamma_{2}=\frac{2-\alpha }{2}-2\gamma_{3}, \max \{ \frac{(2-\alpha)(\alpha^{2}+\alpha -8)}{\alpha^{2}+3\alpha+2},\frac{(1-\alpha)(\alpha^{2}+2\alpha )}{2(\alpha^{2}+3\alpha+4)} \} \leq\gamma_{3}\leq\frac{(2-\alpha)(\alpha^{2}+2\alpha -3)}{2(\alpha^{2}+3\alpha+2)} \}\).
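The defining formulas of these sets imply the consistency relation \(\gamma_{1}+\gamma_{2}+\gamma_{3}=1\), and within \(S^{\alpha}_{1}\) also \(\gamma_{1}-\gamma_{3}=\frac{\alpha}{2}\). A small sketch (function names are our own; picking the midpoint of the admissible interval for \(\gamma_{1}\) is our own choice) that generates admissible parameters from \(S^{\alpha}_{1}\) together with the weights \(w^{(\alpha)}_{k}\):

```python
def wsgl_params_S1(alpha):
    """Pick an admissible (gamma1, gamma2, gamma3) from S_1^alpha.
    The midpoint of the allowed interval for gamma1 is our own choice;
    any value in [lo, hi] is admissible."""
    p = alpha**2 + 3 * alpha
    lo = max(2 * (p - 4) / (p + 2), p / (p + 4))
    hi = 3 * (p - 2) / (2 * (p + 2))
    g1 = 0.5 * (lo + hi)
    g2 = (2 + alpha) / 2 - 2 * g1
    g3 = g1 - alpha / 2
    return g1, g2, g3

def wsgl_w(alpha, K):
    """Weights w_k^{(alpha)}: w_0 = 1, w_k = (1 - (1+alpha)/k) w_{k-1}."""
    w = [1.0]
    for k in range(1, K + 1):
        w.append((1 - (1 + alpha) / k) * w[-1])
    return w
```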

3 Stability and convergence analysis

In order to analyze the stability and convergence of the numerical method, we first introduce some notation and lemmas.

Let

$$ \gamma_{h}= \bigl\{ \zeta^{n} | \zeta^{n}= \bigl( \zeta_{0}^{n},\zeta _{1}^{n}, \ldots, \zeta_{M}^{n} \bigr) \bigr\} , \qquad\widehat{ \gamma}_{h}= \bigl\{ \zeta ^{n} | \zeta^{n} \in \gamma_{h} , \zeta^{n}_{0}=\zeta^{n}_{M}=0 \bigr\} , $$

For any \(u^{n},v^{n}\in\widehat{\gamma}_{h}\), we define the following discrete inner product and corresponding norm:

$$ \bigl(u^{n},v^{n} \bigr)=h\sum ^{M-1}_{i=1}u_{i}^{n}v_{i}^{n}, \qquad \bigl\lVert u^{n} \bigr\rVert =\sqrt{ \bigl(u^{n},u^{n} \bigr)}. $$

Remark 3.1

From [1], we note that the matrix A is negative semi-definite, i.e. for any \(\chi\in\mathbb {R}^{M-1}\), \(\chi A \chi^{T} \leq0\); therefore, we have Lemma 3.1.

Lemma 3.1

For any \(u^{n} \in\widehat{\gamma}_{h}\), we have

$$ \bigl(\delta^{\alpha}_{x}u^{n},u^{n} \bigr)\leq0. $$

Assume that \(\widehat{u}^{n}_{i}\) is the numerical solution of (2.6)–(2.8) starting from another initial value \(\widehat{\varphi}(x)\), and denote \(\eta^{n}=(0,\eta^{n}_{1},\eta^{n}_{2},\ldots,\eta ^{n}_{M-1},0 )\), where \(\eta^{n}_{i}=u^{n}_{i}-\widehat{u}^{n}_{i} \). Then we have the following results.

Theorem 3.1

For any given \(\mu\in(0,1)\), if \(0<\tau\leq\tau _{0}=\frac{1-\mu}{L}\), then the numerical scheme (2.6)–(2.8) is stable, i.e. there exists a constant \(c_{2}\) such that

$$\max_{1\leq n\leq N} \bigl\lVert \eta^{n} \bigr\rVert \leq c_{2} \bigl\lVert \eta^{0} \bigr\rVert . $$

Proof

Subtracting the scheme for \(\widehat{u}^{n}_{i}\) from (2.6), we obtain the following equation:

$$ \eta_{i}^{n+1}=\eta_{i}^{n}+ \tau\delta^{\alpha}_{x} \eta^{n+\frac{1}{2}}_{i}+\tau \bigl(g^{n+\frac{1}{2}}_{i}-\widehat{ g }^{n+\frac {1}{2}}_{i} \bigr),\quad 1\leq i \leq M-1,0\leq n \leq N-1,$$
(3.1)

where \(\widehat{ g }^{n+\frac{1}{2}}_{i}=g(x_{i},t_{n+\frac {1}{2}},\widehat{u}^{n+\frac{1}{2}}_{i} )\).

Multiplying (3.1) by \(h\eta^{n+\frac{1}{2}}_{i}\) and summing over i from 1 to \(M-1\), we have

$$ \begin{aligned}[b] h\sum^{M-1}_{i=1} \eta^{n+1}_{i}\eta^{n+\frac{1}{2}}_{i}&=h\sum ^{M-1}_{i=1}\eta^{n}_{i} \eta^{n+\frac{1}{2}}_{i}+\tau h\sum^{M-1}_{i=1} \bigl(\delta^{\alpha}_{x}\eta^{n+\frac{1}{2}}_{i} \bigr) \eta^{n+\frac {1}{2}}_{i} \\ &\quad {}+\tau h\sum^{M-1}_{i=1} \bigl(g^{n+\frac {1}{2}}_{i} - \widehat{g}^{n+\frac{1}{2}}_{i} \bigr)\eta^{n+\frac{1}{2}}_{i}. \end{aligned} $$
(3.2)

Notice that

$$ \begin{aligned}[b] h\sum^{M-1}_{i=1} \bigl(\eta^{n+1}_{i}\eta^{n+\frac{1}{2}}_{i}- \eta^{n}_{i}\eta^{n+\frac{1}{2}}_{i} \bigr)&=h\sum ^{M-1}_{i=1} \bigl( \eta ^{n+1}_{i}- \eta^{n}_{i} \bigr)\frac{( \eta^{n+1}_{i}+ \eta^{n}_{i})}{2} \\ &=\frac{h}{2}\sum^{M-1}_{i=1} \bigl( \eta^{n+1}_{i} \bigr)^{2}- \frac {h}{2}\sum ^{M-1}_{i=1} \bigl(\eta^{n}_{i} \bigr)^{2} \\ &=\frac{1}{2} \bigl\lVert \eta^{n+1} \bigr\rVert ^{2}- \frac{1}{2} \bigl\lVert \eta ^{n} \bigr\rVert ^{2}. \end{aligned} $$
(3.3)

Employing Lemma 3.1, we find

$$ \tau h\sum^{M-1}_{i=1} \bigl(\delta^{\alpha}_{x}\eta^{n+\frac{1}{2}}_{i} \bigr) \eta ^{n+\frac{1}{2}}_{i}=\tau \bigl(\delta^{\alpha}_{x} \eta^{n+\frac{1}{2}},\eta ^{n+\frac{1}{2}} \bigr) \leq0. $$
(3.4)

Substituting (3.3) and (3.4) into (3.2), we obtain

$$ \begin{aligned}[b] \frac{1}{2} \bigl\lVert \eta^{n+1} \bigr\rVert ^{2} &\leq\frac{1}{2} \bigl\lVert \eta ^{n} \bigr\rVert ^{2} + \Biggl\vert \tau h\sum ^{M-1}_{i=1} \bigl(g^{n+\frac{1}{2}}_{i} - \widehat {g}^{n+\frac{1}{2}}_{i} \bigr)\eta^{n+\frac{1}{2}}_{i} \Biggr\vert \\ &\leq \frac{1}{2} \bigl\lVert \eta^{n} \bigr\rVert ^{2}+\tau h \sum^{M-1}_{i=1} \bigl\vert \bigl(g^{n+\frac{1}{2}}_{i} - \widehat{g}^{n+\frac{1}{2}}_{i} \bigr) \bigr\vert \bigl\vert \eta^{n+\frac{1}{2}}_{i} \bigr\vert . \end{aligned} $$

Since g satisfies the Lipschitz condition (1.4) with respect to u, we have

$$ \begin{aligned}[b] \frac{1}{2} \bigl\lVert \eta^{n+1} \bigr\rVert ^{2}&\leq \frac{1}{2} \bigl\lVert \eta^{n} \bigr\rVert ^{2}+\tau hL \sum ^{M-1}_{i=1} \bigl\vert \eta^{n+\frac{1}{2}}_{i} \bigr\vert ^{2} \\ &\leq \frac{1}{2} \bigl\lVert \eta^{n} \bigr\rVert ^{2}+\frac{\tau L}{2} \bigl\lVert \eta ^{n+1} \bigr\rVert ^{2}+ \frac{\tau L}{2} \bigl\lVert \eta^{n} \bigr\rVert ^{2}. \end{aligned} $$
(3.5)

It follows from (3.5) that

$$ \begin{aligned}[b] \bigl\lVert \eta^{n+1} \bigr\rVert ^{2} &\leq \bigl\lVert \eta^{n} \bigr\rVert ^{2}+\tau L \bigl\lVert \eta^{n+1} \bigr\rVert ^{2}+ \tau L \bigl\lVert \eta^{n} \bigr\rVert ^{2} \\ &\leq \bigl\lVert \eta^{0} \bigr\rVert ^{2}+\tau L \sum ^{n}_{k=0} \bigl( \bigl\lVert \eta^{k+1} \bigr\rVert ^{2}+ \bigl\lVert \eta^{k} \bigr\rVert ^{2} \bigr) \\ &= ( 1-\tau L ) \bigl\lVert \eta^{0} \bigr\rVert ^{2}+ 2\tau L\sum^{n}_{k=0} \bigl\lVert \eta^{k} \bigr\rVert ^{2} +\tau L \bigl\lVert \eta^{n+1} \bigr\rVert ^{2}. \end{aligned} $$
(3.6)

Rearranging (3.6), we obtain

$$ ( 1-\tau L ) \bigl\lVert \eta^{n+1} \bigr\rVert ^{2} \leq ( 1-\tau L ) \bigl\lVert \eta^{0} \bigr\rVert ^{2}+ 2\tau L\sum^{n}_{k=0} \bigl\lVert \eta^{k} \bigr\rVert ^{2}.$$
(3.7)

For any given \(\mu\in(0,1)\), if \(0< \tau\leq\tau_{0}=\frac{1-\mu }{L} \), then \(1-\tau L\geq\mu\), and we obtain from (3.7)

$$ \begin{aligned}[b] \bigl\lVert \eta^{n+1} \bigr\rVert ^{2} &\leq \bigl\lVert \eta^{0} \bigr\rVert ^{2}+ \frac{2\tau L}{1-\tau L }\sum^{n}_{k=0} \bigl\lVert \eta^{k} \bigr\rVert ^{2} \\ &\leq \bigl\lVert \eta^{0} \bigr\rVert ^{2}+ \frac{2\tau L}{\mu}\sum^{n}_{k=0} \bigl\lVert \eta^{k} \bigr\rVert ^{2}. \end{aligned} $$
(3.8)

It follows from (3.8) and the discrete Gronwall inequality that

$$ \begin{aligned}[b] \bigl\lVert \eta^{n+1} \bigr\rVert ^{2}&\leq e^{\frac{2(n+1)\tau L}{\mu}} \bigl\lVert \eta^{0} \bigr\rVert ^{2} \\ &\leq e^{\frac{2T L}{\mu}} \bigl\lVert \eta^{0} \bigr\rVert ^{2}. \end{aligned} $$

Therefore

$$ \max_{1\leq n\leq N} \bigl\lVert \eta^{n} \bigr\rVert \leq c_{2} \bigl\lVert \eta^{0} \bigr\rVert , $$

where \(c_{2}=\sqrt{e^{\frac{2T L}{\mu}}}\). The proof is completed. □

Theorem 3.2

For any given \(\mu\in(0,1)\), if \(0<\tau\leq\tau _{0}=\frac{2-2\mu}{2L+1}\), then the numerical scheme (2.6)–(2.8) is convergent, i.e. there exists a constant \(c_{3}\) such that

$$\max_{1\leq n\leq N} \bigl\lVert \varepsilon^{n} \bigr\rVert \leq c_{3} \bigl(\tau^{2}+h^{2} \bigr), $$

where \(\varepsilon^{n}=( 0,\varepsilon^{n}_{1},\varepsilon ^{n}_{2},\ldots,\varepsilon^{n}_{M-1},0 )\), \(\varepsilon ^{n}_{i}=u(x_{i},t_{n})-u^{n}_{i}\).

Proof

Subtracting (2.6) from (2.4), we get the error equation

$$ \varepsilon_{i}^{n+1}=\varepsilon_{i}^{n}+ \tau\delta^{\alpha}_{x} \varepsilon^{n+\frac{1}{2}}_{i}+ \tau \bigl( \widetilde{ g }^{n+\frac {1}{2}}_{i}-g^{n+\frac{1}{2}}_{i} \bigr)+\tau\mathscr{R}_{i}^{n+\frac{1}{2}},\quad 1\leq i \leq M-1,0\leq n \leq N-1,$$
(3.9)

where \(\widetilde{ g }^{n+\frac{1}{2}}_{i}=g(x_{i},t_{n+\frac {1}{2}},\frac{u(x_{i},t_{n+1})+u(x_{i},t_{n})}{2} )\).

Proceeding as in the proof of Theorem 3.1, we can conclude that

$$ \frac{1}{2} \bigl\lVert \varepsilon^{n+1} \bigr\rVert ^{2} \leq\frac{1}{2} \bigl\lVert \varepsilon^{n} \bigr\rVert ^{2} + \Biggl\vert \tau h\sum ^{M-1}_{i=1} \bigl(\widetilde{g}^{n+\frac {1}{2}}_{i}-g^{n+\frac{1}{2}}_{i} \bigr)\varepsilon^{n+\frac{1}{2}}_{i} \Biggr\vert + \Biggl\vert \tau h\sum^{M-1}_{i=1}\mathscr{R}_{i}^{n+\frac{1}{2}} \varepsilon^{n+\frac{1}{2}}_{i} \Biggr\vert . $$
(3.10)

According to (1.4) and the Cauchy–Schwarz inequality, we obtain from (3.10)

$$ \begin{aligned}[b] \frac{1}{2} \bigl\lVert \varepsilon^{n+1} \bigr\rVert ^{2}&\leq \frac{1}{2} \bigl\lVert \varepsilon^{n} \bigr\rVert ^{2}+\tau h \sum ^{M-1}_{i=1} \bigl\vert \bigl( \widetilde{g}^{n+\frac{1}{2}}_{i}-g^{n+\frac {1}{2}}_{i} \bigr) \bigr\vert \bigl\vert \varepsilon^{n+\frac{1}{2}}_{i} \bigr\vert \\ &\quad {}+\frac{\tau h}{2}\sum^{M-1}_{i=1} \bigl\vert \mathscr{R}^{n+\frac{1}{2}}_{i} \bigr\vert ^{2}+\frac{\tau h}{2}\sum^{M-1}_{i=1} \bigl\vert \varepsilon^{n+\frac{1}{2}}_{i} \bigr\vert ^{2} \\ &\leq \frac{1}{2} \bigl\lVert \varepsilon^{n} \bigr\rVert ^{2}+\tau hL \sum^{M-1}_{i=1} \bigl\vert \varepsilon^{n+\frac{1}{2}}_{i} \bigr\vert ^{2}+\frac {\tau}{2} \bigl\lVert \mathscr{R}^{n+\frac{1}{2}} \bigr\rVert ^{2}+\frac{\tau }{2} \bigl\lVert \varepsilon^{n+\frac{1}{2}} \bigr\rVert ^{2} \\ &\leq \frac{1}{2} \bigl\lVert \varepsilon^{n} \bigr\rVert ^{2}+\frac{\tau L}{2} \bigl\lVert \varepsilon^{n+1} \bigr\rVert ^{2}+ \frac{\tau L}{2} \bigl\lVert \varepsilon ^{n} \bigr\rVert ^{2}+\frac{\tau}{2} \bigl\lVert \mathscr{R}^{n+\frac{1}{2}} \bigr\rVert ^{2} \\ &\quad {}+\frac{\tau}{4} \bigl\lVert \varepsilon^{n+1} \bigr\rVert ^{2}+\frac {\tau}{4} \bigl\lVert \varepsilon^{n} \bigr\rVert ^{2}. \end{aligned} $$
(3.11)

It follows from (3.11) that

$$\begin{aligned} \bigl\lVert \varepsilon^{n+1} \bigr\rVert ^{2} &\leq \bigl\lVert \varepsilon^{n} \bigr\rVert ^{2}+\tau L \bigl\lVert \varepsilon^{n+1} \bigr\rVert ^{2}+ \tau L \bigl\lVert \varepsilon^{n} \bigr\rVert ^{2}+\tau \bigl\lVert \mathscr{R}^{n+\frac{1}{2}} \bigr\rVert ^{2}+\frac{\tau}{2} \bigl\lVert \varepsilon ^{n+1} \bigr\rVert ^{2}+\frac{\tau}{2} \bigl\lVert \varepsilon^{n} \bigr\rVert ^{2} \\ &\leq \bigl\lVert \varepsilon^{0} \bigr\rVert ^{2}+\tau L \sum^{n}_{k=0} \bigl\lVert \varepsilon^{k+1} \bigr\rVert ^{2}+ \tau L\sum^{n}_{k=0} \bigl\lVert \varepsilon^{k} \bigr\rVert ^{2}+\tau\sum ^{n}_{k=0} \bigl\lVert \mathscr {R}^{k+\frac{1}{2}} \bigr\rVert ^{2} \\ &\quad{}+\frac{\tau}{2}\sum^{n}_{k=0} \bigl\lVert \varepsilon^{k+1} \bigr\rVert ^{2}+ \frac{\tau}{2}\sum^{n}_{k=0} \bigl\lVert \varepsilon^{k} \bigr\rVert ^{2} \\ &\leq 2\tau L\sum^{n}_{k=0} \bigl\lVert \varepsilon^{k} \bigr\rVert ^{2}+\tau\sum ^{n}_{k=0} \bigl\lVert \varepsilon^{k} \bigr\rVert ^{2}+\tau L \bigl\lVert \varepsilon^{n+1} \bigr\rVert ^{2} \\ &\quad{}+\frac{\tau}{2} \bigl\lVert \varepsilon^{n+1} \bigr\rVert ^{2} +\tau \sum^{n}_{k=0} \bigl\lVert \mathscr{R}^{k+\frac{1}{2}} \bigr\rVert ^{2}. \end{aligned}$$
(3.12)

Rearranging (3.12), we obtain

$$ \biggl( 1-\tau L -\frac{\tau}{2} \biggr) \bigl\lVert \varepsilon^{n+1} \bigr\rVert ^{2} \leq ( 2\tau L+\tau ) \sum^{n}_{k=0} \bigl\lVert \varepsilon^{k} \bigr\rVert ^{2}+\tau\sum^{n}_{k=0} \bigl\lVert \mathscr{R}^{k+\frac{1}{2}} \bigr\rVert ^{2}. $$
(3.13)

For any given positive number \(\mu\in(0,1)\), if \(0<\tau\leq\tau _{0}=\frac{2-2\mu}{2L+1}\), then we can conclude from (3.13) that

$$ \begin{aligned}[b] \bigl\lVert \varepsilon^{n+1} \bigr\rVert ^{2} &\leq \frac{2\tau L+\tau}{1-\tau L -\frac{\tau}{2}} \sum ^{n}_{k=0} \bigl\lVert \varepsilon^{k} \bigr\rVert ^{2}+\frac{\tau}{1-\tau L -\frac{\tau }{2}}\sum^{n}_{k=0} \bigl\lVert \mathscr{R}^{k+\frac{1}{2}} \bigr\rVert ^{2} \\ &\leq \frac{(2 L+1)\tau}{\mu} \sum^{n}_{k=0} \bigl\lVert \varepsilon^{k} \bigr\rVert ^{2}+\frac{\tau}{\mu}\sum ^{n}_{k=0} \bigl\lVert \mathscr {R}^{k+\frac{1}{2}} \bigr\rVert ^{2}. \end{aligned} $$
(3.14)

In view of the discrete Gronwall inequality, we have

$$ \bigl\lVert \varepsilon^{n+1} \bigr\rVert ^{2} \leq \frac{\tau}{\mu}e^{\frac{(2 L+1)\tau(n+1)}{\mu}}\sum ^{n}_{k=0} \bigl\lVert \mathscr{R}^{k+\frac{1}{2}} \bigr\rVert ^{2}. $$
(3.15)

It follows from (2.5) and the definition of the discrete norm that

$$ \bigl\lVert \mathscr{R}^{k+\frac{1}{2}} \bigr\rVert ^{2}=h\sum^{M-1}_{i=1} \bigl( \mathscr{R}^{k+\frac{1}{2}}_{i} \bigr)^{2} \leq Mh \max _{{1 \leq i \leq M-1}} \bigl\vert \mathscr{R}^{k+\frac{1}{2}}_{i} \bigr\vert ^{2} \leq {c_{1}}^{2}(b-a) \bigl( \tau^{2}+h^{2} \bigr)^{2}. $$
(3.16)

Therefore

$$ \tau\sum^{n}_{k=0} \bigl\lVert \mathscr{R}^{k+\frac{1}{2}} \bigr\rVert ^{2}\leq (n+1)\tau \max_{{0 \leq k \leq n}} \bigl\lVert \mathscr{R}^{k+\frac{1}{2}} \bigr\rVert ^{2} \leq{c_{1}}^{2}(b-a)T \bigl( \tau^{2}+h^{2} \bigr)^{2}. $$
(3.17)

Substituting (3.17) into (3.15), we get

$$ \bigl\lVert \varepsilon^{n+1} \bigr\rVert ^{2} \leq \frac{{c_{1}}^{2}(b-a)T}{\mu}e^{\frac{(2 L+1)T}{\mu}} \bigl( \tau ^{2}+h^{2} \bigr)^{2}. $$
(3.18)

Thus

$$\max_{1\leq n\leq N} \bigl\lVert \varepsilon^{n} \bigr\rVert \leq c_{3} \bigl(\tau^{2}+h^{2} \bigr), $$

where \(c_{3}=\sqrt{\frac{{c_{1}}^{2}(b-a)T}{\mu}e^{\frac{(2 L+1)T}{\mu}}}\). The proof is completed. □

4 Numerical experiments

Denote by \(\lVert\varepsilon(h,\tau) \rVert=\sqrt{h\tau\sum^{N}_{n=1}\sum^{M-1}_{i=1}| u(x_{i},t_{n})-u^{n}_{i} |^{2}}\) the discrete \(L_{2}\) norm of the error, where \(u(x_{i},t_{n})\) and \(u_{i}^{n}\) are the exact solution and numerical solution with step sizes h and τ at the grid point \((x_{i},t_{n})\), respectively. The observation order is defined as

$$ \mbox{Rate}=\log_{2} \biggl( \frac{\lVert\varepsilon(2h,2\tau) \rVert }{\lVert\varepsilon(h,\tau) \rVert} \biggr). $$
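In code, the error norm and the observation order can be computed as in the following sketch (function names are our own), where `u_exact` and `u_num` hold the interior grid values at the time levels \(n=1,\ldots,N\):

```python
import math

def l2_error(h, tau, u_exact, u_num):
    """Discrete L2 error sqrt(h*tau*sum_n sum_i |u(x_i,t_n) - u_i^n|^2);
    u_exact and u_num are lists of time levels, each a list of interior values."""
    s = 0.0
    for ue_row, un_row in zip(u_exact, u_num):
        for ue, un in zip(ue_row, un_row):
            s += (ue - un) ** 2
    return math.sqrt(h * tau * s)

def observed_rate(err_coarse, err_fine):
    """Rate = log2(||eps(2h, 2tau)|| / ||eps(h, tau)||)."""
    return math.log2(err_coarse / err_fine)
```

Halving both h and τ for a second-order scheme should then give Rate ≈ 2.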

Example 1

Consider the initial-boundary value problem in the following Riesz tempered fractional diffusion equation:

$$ \textstyle\begin{cases} \frac{\partial u(x,t)}{\partial t}=\frac{\partial^{\alpha,\lambda} u(x,t)}{\partial \vert x \vert ^{\alpha} }+g(x,t,u(x,t)),& x\in(0,1), t\in(0,1],\\ u(x,0)=x^{2}(1-x)^{2},& x\in[0,1],\\ u(0,t)=u(1,t)=0,& t\in[0,1], \end{cases} $$

where \(1< \alpha< 2\), the nonlinear source term is

$$ \begin{aligned}[b] g \bigl(x,t,u(x,t) \bigr)&= \bigl(u(x,t) \bigr)^{2}-x^{2}(1-x)^{2}e^{-t} \\ &\quad {}+\frac{e^{-t}}{2\cos(\frac{\pi\alpha }{2})} \Biggl[ e^{-\lambda x}\sum ^{\infty}_{k=0}\sum^{4}_{m=2} \frac{\lambda ^{k}\varGamma({k+m+1})A_{m}}{\varGamma(k+1)\varGamma(k+m+1-\alpha )}x^{k+m-\alpha} \\ &\quad {}+e^{\lambda x-\lambda}\sum^{\infty }_{k=0} \sum^{4}_{m=2}\frac{\lambda^{k}\varGamma({k+m+1})A_{m}}{\varGamma (k+1)\varGamma(k+m+1-\alpha)}(1-x)^{k+m-\alpha} \\ &\quad {}-2\lambda^{\alpha}x^{2}(1-x)^{2} \Biggr]-x^{4}(1-x)^{4}e^{-2t}, \end{aligned} $$

where \(A_{2}=1\), \(A_{3}=-2\), \(A_{4}=1\).

The exact solution of Example 1 is

$$ u(x,t)=x^{2}(1-x)^{2}e^{-t}. $$

From Table 1, we can observe second-order accuracy in both the spatial and temporal directions for different α and λ, which is in line with our convergence analysis. The numerical solutions of Example 1 are shown in Fig. 1. From Fig. 2 we can see that the global perturbation errors are controlled by the initial perturbation errors, which confirms the correctness of our stability analysis.

Figure 1

Numerical solutions for Example 1 with \(h=\tau =0.01\), \(\alpha=1.5\), \(\lambda=1\)

Figure 2

Numerical solutions for the perturbation equations of Example 1 with \(h=\tau=0.01\), \(\alpha=1.5\), \(\lambda=1\) and the initial perturbation error \(\eta^{0}_{i}=10^{-4}\) (\(0\leq i \leq M \))

Table 1 Errors and corresponding observation orders of T-ML2 in spatial and temporal directions for Example 1 with different α and λ

Example 2

Consider the initial-boundary value problem in the following Riesz tempered fractional diffusion equation:

$$ \textstyle\begin{cases} \frac{\partial u(x,t)}{\partial t}=\frac{\partial^{\alpha,\lambda} u(x,t)}{\partial \vert x \vert ^{\alpha} }+g(x,t),& x\in(0,1), t\in (0,1],\\ u(x,0)=0,& x\in[0,1],\\ u(0,t)=u(1,t)=0,& t\in[0,1], \end{cases} $$

where \(1< \alpha< 2\), the linear source term \(g(x,t)\) is

$$\begin{aligned} g(x,t)&=(2+\alpha)t^{1+\alpha}e^{-\lambda x}x^{4}(1-x)^{4}+ \frac {t^{2+\alpha}}{2\cos(\frac{\pi\alpha}{2})} \Biggl[ e^{-\lambda x}\sum^{4}_{m=0}(-1)^{m} \tbinom{4}{m}\frac{\varGamma (5+m)}{\varGamma(5+m-\alpha)}x^{4+m-\alpha} \\ &\quad {}+e^{\lambda(x-2)}\sum^{\infty}_{j=0} \frac {(2\lambda)^{j}}{\varGamma(j+1)}\sum^{4}_{m=0}(-1)^{m} \tbinom {4}{m}\frac{\varGamma(5+m+j)}{\varGamma(5+m+j-\alpha)} (1-x)^{4+m+j-\alpha} \\ &\quad {}-2\lambda^{\alpha}e^{-\lambda x}x^{4}(1-x)^{4} \Biggr]. \end{aligned}$$

The exact solution of Example 2 is

$$u(x,t)=t^{2+\alpha}e^{-\lambda x}x^{4}(1-x)^{4}. $$

For comparison, we apply our numerical scheme (2.6)–(2.8) (T-ML2) and the method (T-WSGL) in [24] to solve Example 2 with different α and \(\lambda=1\). The errors and corresponding observation orders are listed in Table 2; we find that T-ML2 and T-WSGL are both effective for solving Example 2. However, the values of \(\gamma_{1}\), \(\gamma_{2}\) and \(\gamma_{3}\) must be selected for each α when T-WSGL is used. In this sense, T-ML2 may be more convenient than T-WSGL for Example 2.

Table 2 Errors and corresponding observation orders with \(\lambda=1\), and the parameters \(\gamma_{1}\), \(\gamma_{2}\), \(\gamma_{3}\) are selected in \(S_{1}^{\alpha}(\gamma_{1},\gamma_{2},\gamma_{3})\)

5 Conclusion

In this paper, an implicit midpoint scheme has been proposed for solving the Riesz tempered fractional diffusion equation with a nonlinear source term. The numerical scheme was proved to be stable and convergent by the energy method, and numerical examples verified the correctness of the theoretical analysis and the effectiveness of the proposed method.