1 Introduction

Nowadays, the study of time-fractional diffusion equations has drawn attention from various disciplines of science and engineering, such as mechanical engineering [1, 2], viscoelasticity [3], Lévy motion [4], electron transport [5], dissipation [6], heat conduction [7–11] and high-frequency financial data [12]. A number of experiments have shown that, in modeling real physical phenomena such as Brownian motion [13], fractional calculus and fractional derivatives provide more accurate simulations than traditional calculus with integer-order derivatives. Fractional derivatives have also proved to be more flexible in describing viscoelastic behavior. In particular, fractional models are believed to be more realistic in describing anomalous diffusion in heterogeneous porous media.

In recent years, researchers have gradually found that fractional derivatives have a natural advantage in describing the memory and hereditary properties of materials. Slow diffusion can be characterized by a long-tailed profile in the spatial distribution of densities as time passes, and continuous-time random walks have been applied to underground environmental problems. Fractional derivatives are therefore applied in many fields of science, both analytically [14–18] and numerically [19–22]. In practical problems, however, we often need to recover part of the boundary data or the source term of the equation from measured data. This leads to inverse problems for the fractional diffusion equation, on which some work has been published. In [10], the authors studied the inverse problem of restoring the initial data of a solution, classical in time and with values in a space of periodic spatial distributions, for time-fractional diffusion and diffusion-wave equations. In [23], the authors considered the problem of identifying an unknown coefficient in a nonlinear diffusion equation. In [24], the authors considered the backward inverse problem for a time-fractional diffusion equation. In [25], Liu and Yamamoto used the quasi-reversibility method to solve a backward problem for a time-fractional diffusion equation in the one-dimensional case. In [26], Murio used the mollification technique to solve a source term identification problem for a time-fractional diffusion equation. In [27], Wang solved a backward problem for a time-fractional diffusion equation with variable coefficients in a general bounded domain by the Tikhonov regularization method. In [28], Zhang considered an inverse source problem for a fractional diffusion equation.

In this paper, we consider the following problem:

$$ \textstyle\begin{cases} D_{t}^{\alpha}u(x,t)-(Lu)(x,t)=f(x), & x\in\Omega,t\in(0,T),0< \alpha < 1, \\ u(x,t)=0, & x\in\partial\Omega,t\in(0,T), \\ u(x,0)=0, & x\in\overline{\Omega}, \\ u(x,T)=g(x), & x\in\overline{\Omega}, \end{cases} $$
(1.1)

where Ω is a bounded domain in \(\mathbb{R}^{d}\) with sufficiently smooth boundary \(\partial\Omega\) and \(D_{t}^{\alpha}\) is the Caputo fractional derivative of order α defined by

$$ D_{t}^{\alpha}u(x,t)= \textstyle\begin{cases} \frac{1}{\Gamma(1-\alpha)}\int_{0}^{t}\frac{u_{\tau}(x,\tau)}{(t-\tau )^{\alpha}}\, d\tau, & 0< \alpha< 1, \\ u_{t}(x,t), & \alpha=1. \end{cases} $$
(1.2)

L is a symmetric uniformly elliptic operator defined by

$$ Lu(x)=\sum_{i=1}^{d} \frac{\partial}{\partial x_{i}}\Biggl(\sum_{j=1}^{d}a_{ij}(x) \frac{\partial}{\partial x_{j}}u(x)\Biggr)+c(x)u(x),\quad x\in \Omega, $$
(1.3)

where the coefficients satisfy

$$a_{ij}=a_{ji}\in C^{1}(\overline{\Omega}) \quad (1 \leq i,j\leq d) $$

and there exists a constant \(\nu>0\) such that

$$\begin{aligned}& \nu\sum_{i=1}^{d}\xi_{i}^{2} \leq\sum_{i,j=1}^{d}a_{ij}(x) \xi_{i}\xi _{j}, \quad x\in\overline{\Omega},\xi\in \mathbb{R}^{d}, \\& c(x)\leq0,\qquad c(x)\in C(\overline{\Omega}). \end{aligned}$$

Denote the eigenvalues of −L by \(\lambda_{n}\). We suppose \(\lambda_{n}\) (see [29]) satisfy

$$ 0< \lambda_{1}\leq\lambda_{2}\leq\cdots\leq \lambda_{n}\leq\cdots,\qquad \lim_{n\rightarrow+\infty} \lambda_{n}=+\infty $$
(1.4)

and the corresponding eigenfunctions \(\varphi_{n}(x)\in H^{2}(\Omega )\cap H_{0}^{1}(\Omega)\).

The source function \(f(x)\) is unknown in problem (1.1). We use the additional condition \(u(x,T)=g(x)\) to identify the unknown source \(f(x)\). In practice, the data \(g(x)\) are never known exactly. We assume that the exact data \(g(x)\) and the measured data \(g^{\delta}(x)\) satisfy

$$ \bigl\Vert g-g^{\delta} \bigr\Vert \le\delta, $$
(1.5)

where \(\|\cdot\|\) is the \(L^{2}(\Omega)\) norm and \(\delta>0\) is a noise level.

If \(\alpha=1\), the equation of problem (1.1) is a standard heat conduction equation, for which a lot of research results have been published (see [30–35], etc.). In this paper, we only consider \(0<\alpha<1\) for identifying the unknown source of the time-fractional diffusion equation. In [36], Zhang used a truncation method to identify the unknown source for the time-fractional diffusion equation, and in [37], Wang used a simplified Tikhonov regularization method to solve it, but both considered the inverse source problem for the time-fractional diffusion equation only in a regular domain. In [38], the author used the quasi-reversibility method to solve problem (1.1). However, the error estimates from [37, 38] are not of optimal order, which leads to a saturation phenomenon.

In this article, the Landweber iterative method is used to deal with the ill-posed problem (1.1) in a general region, and convergence estimates are obtained under both a priori and a posteriori regularization parameter choice rules. Moreover, the convergence estimates obtained with our method are of optimal order. The Landweber iteration method [39], proposed by Landweber, Friedman and Bialy, is a kind of iterative algorithm for solving the operator equation \(Kx=y\).

The structure of this paper is as follows. In Section 2, some basic lemmas and results are given. In Section 3, the Landweber iterative regularization method and regularization solution are given. In Section 4, the convergence estimates under the a priori and a posteriori regularization parameter choice rules are given. In Section 5, numerical implementation and numerical examples are given. In Section 6, some conclusions as regards this paper are given.

2 Lemmas and results

Definition 2.1

([40])

The Mittag-Leffler function is defined by

$$ E_{\alpha,\beta}(z)=\sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k+\beta)}, \quad z\in\mathbb{C}, $$
(2.1)

where \(\alpha>0\) and \(\beta\in\mathbb{R}\) are arbitrary constants.
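For numerical experiments it is convenient to evaluate \(E_{\alpha,\beta}(z)\) directly from the series (2.1). Below is a minimal Python sketch of our own (it is not the algorithm of [44] used in Section 5): it sums the series at high precision with mpmath, because in double precision the alternating terms cancel catastrophically once \(|z|\) exceeds a few units, so it is adequate for moderate \(|z|\) only.

```python
import mpmath as mp

def mittag_leffler(z, alpha, beta, dps=60, max_terms=2000):
    """E_{alpha,beta}(z) by direct summation of the series (2.1).

    Summed at `dps` decimal digits because the terms first grow and
    then cancel; adequate for moderate |z| only, while large arguments
    require a dedicated algorithm such as the one in [44].
    """
    with mp.workdps(dps):
        s = mp.mpf(0)
        for k in range(max_terms):
            term = mp.power(z, k) / mp.gamma(alpha * k + beta)
            s += term
            if abs(term) < mp.mpf(10) ** (-dps):
                break
        return float(s)
```

As a check, \(E_{1/2,1}(-1)=e\operatorname{erfc}(1)\approx0.4276\), which the sketch reproduces.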

Lemma 2.2

([41])

For the Mittag-Leffler function, we have

$$ E_{\alpha,\beta}(z)=zE_{\alpha,\alpha+\beta}(z)+\frac{1}{\Gamma(\beta)}. $$
(2.2)

Lemma 2.3

([40])

Let \(a>0\). Then we have

$$ \int_{0}^{\infty}e^{-pt}t^{\gamma k+\beta-1}E_{\gamma,\beta}^{(k)} \bigl(\pm at^{\gamma}\bigr)\,dt=\frac{k!p^{\gamma-\beta}}{(p^{\gamma}\mp a)^{k+1}}, \quad \operatorname{Re}(p)>|a|^{\frac{1}{\gamma}}, $$
(2.3)

where \(E_{\gamma,\beta}^{(k)}(y):=\frac{d^{k}}{dy^{k}}E_{\gamma,\beta }(y)\).

Lemma 2.3 implies that the Laplace transform of \(t^{\gamma k+\beta -1}E_{\gamma,\beta}^{(k)}(\pm at^{\gamma})\) is \(\frac{k!p^{\gamma-\beta }}{(p^{\gamma}\mp a)^{k+1}}\).

Lemma 2.4

([42])

For \(0<\alpha<1\), \(\eta>0\), we have \(0\leq E_{\alpha ,1}(-\eta)\leq1\) and \(E_{\alpha,1}(-\eta)\) is a completely monotonic function, i.e.,

$$ (-1)^{n}\frac{d^{n}}{d\eta^{n}}E_{\alpha,1}(-\eta)\geq0, \quad \eta\geq0. $$
(2.4)

Lemma 2.5

Suppose \(\lambda_{n}\) are the eigenvalues of the operator \(-L\) and satisfy \(\lambda_{n}\geq\cdots\geq\lambda_{1}>0\) as in (1.4). Then there exists a positive constant \(C_{1}\), depending on α, T and \(\lambda_{1}\), such that

$$ \frac{C_{1}}{\lambda_{n}T^{\alpha}}\leq E_{\alpha,1+\alpha}\bigl(-\lambda _{n}T^{\alpha}\bigr)\leq\frac{1}{\lambda_{n}T^{\alpha}}, $$
(2.5)

where \(C_{1}(\alpha,T,\lambda_{1})=1-E_{\alpha,1}(-\lambda_{1}T^{\alpha})\).

Proof

From Lemma 2.2 and Lemma 2.4, we easily get

$$ E_{\alpha,1+\alpha}\bigl(-\lambda_{n}T^{\alpha} \bigr)=\frac{E_{\alpha,1}(-\lambda _{n}T^{\alpha})-1}{-\lambda_{n}T^{\alpha}} =\frac{1-E_{\alpha,1}(-\lambda_{n}T^{\alpha})}{\lambda_{n}T^{\alpha }}\leq\frac{1}{\lambda_{n}T^{\alpha}}. $$
(2.6)

From Lemma 2.4, we know \(E_{\alpha,1}(-\lambda_{n}T^{\alpha})\leq E_{\alpha,1}(-\lambda_{1}T^{\alpha})\) when \(\lambda_{n} \geq\lambda _{1}\), so

$$ E_{\alpha,1+\alpha}\bigl(-\lambda_{n}T^{\alpha} \bigr)=\frac{1-E_{\alpha ,1}(-\lambda_{n}T^{\alpha})}{\lambda_{n}T^{\alpha}} \geq\frac{1-E_{\alpha,1}(-\lambda_{1}T^{\alpha})}{\lambda_{n}T^{\alpha }}=\frac{C_{1}}{\lambda_{n}T^{\alpha}}, $$
(2.7)

where \(C_{1}(\alpha,T,\lambda_{1})=1-E_{\alpha,1}(-\lambda_{1}T^{\alpha})\). □
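As a quick numerical sanity check of (2.5), one can compare both bounds for a few sample eigenvalues (a sketch assuming the mittag_leffler routine above; α, T and the eigenvalues are our own choices):

```python
# Verify C1/(lam*T^alpha) <= E_{alpha,1+alpha}(-lam*T^alpha) <= 1/(lam*T^alpha).
alpha, T = 0.5, 1.0
lams = [1.0, 1.5, 2.0, 3.0]                      # sample eigenvalues, lambda_1 = 1
C1 = 1.0 - mittag_leffler(-lams[0] * T**alpha, alpha, 1.0)
for lam in lams:
    E = mittag_leffler(-lam * T**alpha, alpha, 1.0 + alpha)
    assert C1 / (lam * T**alpha) - 1e-10 <= E <= 1.0 / (lam * T**alpha)
```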

3 Regularization method

As in [29, 43], define

$$ D\bigl((-L)^{\gamma}\bigr)=\Biggl\{ \psi\in L^{2}( \Omega): \sum_{n=1}^{\infty}\lambda _{n}^{2\gamma} \bigl\vert (\psi,\varphi_{n}) \bigr\vert ^{2}< \infty\Biggr\} , $$
(3.1)

where \((\cdot,\cdot)\) is the inner product in \(L^{2}(\Omega)\) and \(D((-L)^{\gamma})\) is a Hilbert space with the norm

$$ \|\psi\|_{D((-L)^{\gamma})}=\Biggl(\sum_{n=1}^{\infty} \lambda_{n}^{2\gamma } \bigl\vert (\psi,\varphi_{n}) \bigr\vert ^{2}\Biggr)^{\frac{1}{2}}. $$
(3.2)

Now using separation of variables and Lemma 2.3, we get the solution of problem (1.1) as follows:

$$u(x,t)=\sum_{n=1}^{\infty}\bigl(f(x), \varphi_{n}(x)\bigr)t^{\alpha}E_{\alpha ,1+\alpha}\bigl(- \lambda_{n}t^{\alpha}\bigr)\varphi_{n}(x). $$

Denote \(f_{n}=(f(x),\varphi_{n}(x))\), \(g_{n}=(g(x),\varphi_{n}(x))\) and let \(t=T\). Then

$$ g(x)=u(x,T)=\sum_{n=1}^{\infty}f_{n}T^{\alpha}E_{\alpha,1+\alpha } \bigl(-\lambda_{n}T^{\alpha}\bigr)\varphi_{n}(x) $$
(3.3)

and

$$ g_{n}=f_{n}T^{\alpha}E_{\alpha,1+\alpha} \bigl(-\lambda_{n}T^{\alpha}\bigr). $$
(3.4)

Hence we obtain

$$ f_{n}=\frac{g_{n}}{T^{\alpha}E_{\alpha,1+\alpha}(-\lambda_{n}T^{\alpha})} $$
(3.5)

and

$$ f(x)=\sum_{n=1}^{\infty}f_{n} \varphi_{n}=\sum_{n=1}^{\infty} \frac {1}{T^{\alpha}E_{\alpha,1+\alpha}(-\lambda_{n}T^{\alpha})}g_{n}\varphi _{n}(x). $$
(3.6)

Using Lemma 2.5, we have

$$ \bigl\vert T^{\alpha}E_{\alpha,1+\alpha}\bigl(- \lambda_{n}T^{\alpha}\bigr) \bigr\vert = \biggl\vert \frac{E_{\alpha,1}(-\lambda_{n}T^{\alpha})-1}{-\lambda_{n}} \biggr\vert \leq \frac{1}{\lambda_{n}}. $$
(3.7)

Consequently,

$$ \biggl\vert \frac{1}{T^{\alpha}E_{\alpha,1+\alpha}(-\lambda_{n}T^{\alpha })} \biggr\vert \geq \lambda_{n}. $$
(3.8)
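To see how quickly this factor grows, consider the special case \(L=\partial^{2}/\partial x^{2}\) on \(\Omega=(0,1)\), where \(\lambda_{n}=(n\pi)^{2}\) (an illustrative choice of ours). A sketch, again assuming the mittag_leffler routine from Section 2:

```python
import numpy as np

# Amplification factor 1/(T^alpha E_{alpha,1+alpha}(-lambda_n T^alpha)) in (3.6);
# by (3.8) it is bounded below by lambda_n.
alpha, T = 0.9, 1.0
for n in range(1, 4):
    lam = (n * np.pi) ** 2                       # eigenvalues of -d^2/dx^2 on (0,1)
    sigma = T**alpha * mittag_leffler(-lam * T**alpha, alpha, 1 + alpha, dps=150)
    print(n, lam, 1.0 / sigma)                   # grows at least like lambda_n
```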

Small errors in the high-frequency components of \(g^{\delta}(x)\) will thus be amplified by the factor \(\frac{1}{T^{\alpha}E_{\alpha,1+\alpha}(-\lambda _{n}T^{\alpha})}\), so problem (1.1) is ill-posed and a regularization method must be used to solve it. We first impose the following a priori bound on the exact solution \(f(x)\):

$$ \bigl\Vert f(x) \bigr\Vert _{D((-L)^{\frac{p}{2}})}\leq E, \quad p>0, $$
(3.9)

where E is a positive constant.

A conditional stability estimate of the inverse source problem (1.1) is given below.

Theorem 3.1

([27])

If \(\|f(x)\|_{D((-L)^{\frac{p}{2}})}\leq E\), then

$$ \bigl\Vert f(x) \bigr\Vert \leq C_{2}E^{\frac{2}{p+2}} \bigl\Vert g(x) \bigr\Vert ^{\frac{p}{p+2}}, $$
(3.10)

where \(C_{2}:=C_{1}^{-\frac{p}{p+2}}\) is a constant.

To find \(f(x)\), we need to solve the following integral equation:

$$ (Kf) (x):= \int_{\Omega}k(x,\xi)f(\xi)\, d\xi=g(x). $$
(3.11)

Since \(k(x,\xi)=k(\xi,x)\), K is a self-adjoint operator. From Theorem 2.4 of [29], if \(f\in L^{2}(\Omega)\), then \(g\in H^{2}(\Omega)\). Because \(H^{2}(\Omega)\) is compactly embedded in \(L^{2}(\Omega)\), we conclude that \(K:L^{2}(\Omega)\rightarrow L^{2}(\Omega)\) is a compact operator.

Since the \(\varphi_{n}(x)\) form an orthonormal basis in \(L^{2}(\Omega)\), the numbers

$$ \sigma_{n}=T^{\alpha}E_{\alpha,1+\alpha}\bigl(- \lambda_{n}T^{\alpha}\bigr),\quad n=1,2,\ldots $$
(3.12)

are the singular values of K, and \(\varphi_{n}\) are the corresponding eigenfunctions.

Now, we use the Landweber iterative method to obtain the regularization solution for (1.1). We rewrite the equation \(Kf=g\) in the form \(f=(I-aK^{*}K)f+aK^{*}g\) for some \(a>0\) and give the following iterative form:

$$ f^{0}(x):=0,\qquad f^{m}(x)= \bigl(I-aK^{*}K\bigr)f^{m-1}(x)+aK^{*}g(x), \quad m=1,2,3,\ldots, $$
(3.13)

where m is the iteration step number, which also plays the role of the regularization parameter, and a is the relaxation factor, satisfying \(0< a<\frac{1}{\|K\|^{2}}\). Since K is a self-adjoint operator, we obtain

$$ f^{m,\delta}(x)=a\sum_{k=0}^{m-1} \bigl(I-aK^{2}\bigr)^{k}Kg^{\delta}(x). $$
(3.14)

Using (3.12), we get

$$ f^{m,\delta}(x)=R_{m}g^{\delta}(x)=\sum _{n=1}^{\infty}\frac {1-(1-aT^{2\alpha}E_{\alpha,1+\alpha}^{2}(-\lambda_{n}T^{\alpha }))^{m}}{T^{\alpha}E_{\alpha,1+\alpha}(-\lambda_{n}T^{\alpha })}g_{n}^{\delta} \varphi_{n}(x), $$
(3.15)

where \(g_{n}^{\delta}=(g^{\delta}(x),\varphi_{n}(x))\).

Because \(\sigma_{n}=T^{\alpha}E_{\alpha,1+\alpha}(-\lambda _{n}T^{\alpha})\) are singular values of K and \(0< a<\frac{1}{\|K\| ^{2}}\), we can easily see \(0< aT^{2\alpha}E_{\alpha,1+\alpha }^{2}(-\lambda_{n}T^{\alpha})<1\).
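In a discretized setting, iteration (3.13) is straightforward to implement. A minimal sketch for a matrix K (the function name and interface are ours):

```python
import numpy as np

def landweber(K, g_delta, a, m):
    """Landweber iterates (3.13) for K f = g_delta, starting from f^0 = 0.

    Requires 0 < a < 1/||K||^2; the update below is algebraically the
    same as f^m = (I - a K^T K) f^{m-1} + a K^T g_delta.
    """
    f = np.zeros(K.shape[1])
    for _ in range(m):
        f = f + a * K.T @ (g_delta - K @ f)
    return f
```

For our problem the discretized K is symmetric, so \(K^{T}=K\) and the iterates agree with (3.14).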

4 Error estimate under two parameter choice rules

In this section, we will give error estimates under the a priori choice rule and the a posteriori choice rule.

  • An a priori choice rule

Theorem 4.1

Let \(f(x)\), given by (3.6), be the exact solution of problem (1.1). Let \(f^{m,\delta}(x)\) be the regularization solution. Let conditions (1.5) and (3.9) hold. If we choose regularization parameter \(m=[b]\), where

$$ b=\biggl(\frac{E}{\delta}\biggr)^{\frac{4}{p+2}}, $$
(4.1)

then we have the following error estimate:

$$ \bigl\Vert f^{m,\delta}(\cdot)-f(\cdot) \bigr\Vert \leq C_{3}E^{\frac{2}{p+2}}\delta ^{\frac{p}{p+2}}, $$
(4.2)

where \([b]\) denotes the largest integer less than or equal to b and \(C_{3}=\sqrt{a}+(\frac{p}{aC_{1}^{2}})^{\frac{p}{4}}\) is a constant.

Proof

Using the triangle inequality, we have

$$\begin{aligned} \begin{aligned} \bigl\Vert f^{m,\delta}(\cdot)-f(\cdot) \bigr\Vert ={}& \Biggl\Vert \sum_{n=1}^{\infty }\frac{1-(1-aT^{2\alpha}E_{\alpha,1+\alpha}^{2}(-\lambda_{n}T^{\alpha }))^{m}}{T^{\alpha}E_{\alpha,1+\alpha}(-\lambda_{n}T^{\alpha })}g_{n}^{\delta} \varphi_{n}(x) \\ &{}-\sum_{n=1}^{\infty}\frac{1}{T^{\alpha}E_{\alpha,1+\alpha }(-\lambda_{n}T^{\alpha})}g_{n} \varphi_{n}(x) \Biggr\Vert \\ \leq{}& \Biggl\Vert \sum_{n=1}^{\infty} \frac{1-(1-aT^{2\alpha}E_{\alpha ,1+\alpha}^{2}(-\lambda_{n}T^{\alpha}))^{m}}{T^{\alpha}E_{\alpha ,1+\alpha}(-\lambda_{n}T^{\alpha})}g_{n}^{\delta}\varphi_{n}(x) \\ &{}-\sum_{n=1}^{\infty}\frac{1-(1-aT^{2\alpha}E_{\alpha ,1+\alpha}^{2}(-\lambda_{n}T^{\alpha}))^{m}}{T^{\alpha}E_{\alpha ,1+\alpha}(-\lambda_{n}T^{\alpha})}g_{n} \varphi_{n}(x) \Biggr\Vert \\ &{}+ \Biggl\Vert \sum_{n=1}^{\infty} \frac{1-(1-aT^{2\alpha}E_{\alpha ,1+\alpha}^{2}(-\lambda_{n}T^{\alpha}))^{m}}{T^{\alpha}E_{\alpha ,1+\alpha}(-\lambda_{n}T^{\alpha})}g_{n}\varphi_{n}(x) \\ &{}-\sum_{n=1}^{\infty}\frac{1}{T^{\alpha}E_{\alpha,1+\alpha }(-\lambda_{n}T^{\alpha})}g_{n} \varphi_{n}(x) \Biggr\Vert \\ ={}& \bigl\Vert f^{m,\delta}(\cdot)-f^{m}(\cdot) \bigr\Vert + \bigl\Vert f^{m}(\cdot )-f(\cdot) \bigr\Vert . \end{aligned} \end{aligned}$$

Using condition (1.5), we get

$$\begin{aligned} \bigl\Vert f^{m,\delta}(\cdot)-f^{m}(\cdot) \bigr\Vert ^{2} &=\sum_{n=1}^{\infty} \frac{(1-(1-aT^{2\alpha}E_{\alpha ,1+\alpha}^{2}(-\lambda_{n}T^{\alpha}))^{m})^{2}}{T^{2\alpha}E_{\alpha ,1+\alpha}^{2}(-\lambda_{n}T^{\alpha})}\bigl(g_{n}^{\delta}-g_{n} \bigr)^{2} \\ &\leq\sup_{n\geq1}\bigl(A(n)\bigr)^{2} \delta^{2}, \end{aligned}$$

where \(A(n):=\frac{1-(1-aT^{2\alpha}E_{\alpha,1+\alpha}^{2}(-\lambda _{n}T^{\alpha}))^{m}}{T^{\alpha}E_{\alpha,1+\alpha}(-\lambda _{n}T^{\alpha})}\).

For \(0< x<1\), we have

$$ x\leq\sqrt{x} $$
(4.3)

and

$$ (1-x)^{h}\geq1-hx \quad (h\geq1). $$
(4.4)

Using (4.3) and (4.4), we obtain

$$ 1-\bigl(1-aT^{2\alpha}E_{\alpha,1+\alpha}^{2}\bigl(- \lambda_{n}T^{\alpha}\bigr)\bigr)^{m} \leq \sqrt{am}T^{\alpha}E_{\alpha,1+\alpha}\bigl(-\lambda_{n}T^{\alpha} \bigr), $$
(4.5)

i.e.,

$$ A(n)\leq\sqrt{am}, $$
(4.6)

so

$$ \bigl\Vert f^{m,\delta}(\cdot)-f^{m}(\cdot) \bigr\Vert \leq\sqrt{am}\delta. $$
(4.7)

On the other hand, using (3.9), we get

$$\begin{aligned} \bigl\Vert f^{m}(\cdot)-f(\cdot) \bigr\Vert ^{2} &= \Biggl\Vert \sum_{n=1}^{\infty} \frac{[1-(1-aT^{2\alpha}E_{\alpha ,1+\alpha}^{2}(-\lambda_{n}T^{\alpha}))^{m}]-1}{T^{\alpha}E_{\alpha ,1+\alpha}(-\lambda_{n}T^{\alpha})}g_{n}\varphi_{n}(x) \Biggr\Vert ^{2} \\ &=\sum_{n=1}^{\infty}\frac{(1-aT^{2\alpha}E_{\alpha ,1+\alpha}^{2}(-\lambda_{n}T^{\alpha}))^{2m}}{T^{2\alpha}E_{\alpha ,1+\alpha}^{2}(-\lambda_{n}T^{\alpha})}g_{n}^{2} \\ &=\sum_{n=1}^{\infty}\bigl(1-aT^{2\alpha}E_{\alpha,1+\alpha }^{2} \bigl(-\lambda_{n}T^{\alpha}\bigr)\bigr)^{2m}( \lambda_{n})^{-p}\bigl(f_{n}^{2}(\lambda _{n})^{p}\bigr) \\ &\leq\sup_{n\geq1}\bigl(B(n)\bigr)^{2}E^{2}, \end{aligned}$$

where \(B(n):=(1-aT^{2\alpha}E_{\alpha,1+\alpha}^{2}(-\lambda _{n}T^{\alpha}))^{m}(\lambda_{n})^{-\frac{p}{2}}\).

Using Lemma 2.5, we have

$$ B(n)\leq\biggl(1-\frac{aC_{1}^{2}}{\lambda_{n}^{2}}\biggr)^{m}( \lambda_{n})^{-\frac{p}{2}}. $$
(4.8)

Let

$$ F(s):=\biggl(1-\frac{aC_{1}^{2}}{s^{2}}\biggr)^{m}s^{-\frac{p}{2}}, \quad s:=\lambda_{n}. $$
(4.9)

Let \(s_{0}\) satisfy \(F'(s_{0})=0\). Then we easily get

$$ s_{0}=\biggl(\frac{aC_{1}^{2}(4m+p)}{p}\biggr)^{\frac{1}{2}}, $$
(4.10)

so

$$\begin{aligned} F(s_{0}) & =\biggl(1-\frac{p}{4m+p}\biggr)^{m}\biggl( \frac{aC_{1}^{2}(4m+p)}{p}\biggr)^{-\frac {p}{4}} \\ & \leq\biggl(\frac{p}{(m+1)aC_{1}^{2}}\biggr)^{\frac{p}{4}}, \end{aligned}$$
(4.11)

i.e.,

$$ F(s)\leq\biggl(\frac{p}{aC_{1}^{2}}\biggr)^{\frac{p}{4}}(m+1)^{-\frac{p}{4}}. $$
(4.12)

Thus we obtain

$$ B(n)\leq\biggl(\frac{p}{aC_{1}^{2}}\biggr)^{\frac{p}{4}}(m+1)^{-\frac{p}{4}}. $$
(4.13)

Hence

$$ \bigl\Vert f^{m}(\cdot)-f(\cdot) \bigr\Vert \leq \biggl(\frac{p}{aC_{1}^{2}}\biggr)^{\frac {p}{4}}(m+1)^{-\frac{p}{4}}E. $$
(4.14)

Combining (4.7) and (4.14), we choose \(m=[b]\) and we get

$$ \bigl\Vert f^{m,\delta}(\cdot)-f(\cdot) \bigr\Vert \leq C_{3}E^{\frac{2}{p+2}}\delta^{\frac {p}{p+2}}, $$
(4.15)

where \(C_{3}:=\sqrt{a}+(\frac{p}{aC_{1}^{2}})^{\frac{p}{4}}\). The theorem is proved. □
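In computations, the a priori choice (4.1) amounts to a one-line helper (a hypothetical routine of ours; it presumes E, δ and p are known):

```python
import math

def a_priori_steps(E, delta, p):
    """A priori regularization parameter m = [(E/delta)^(4/(p+2))] from (4.1)."""
    return math.floor((E / delta) ** (4.0 / (p + 2)))
```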

  • An a posteriori selection rule

We construct regularization solution sequences \(f^{m,\delta}(x)\) by the Landweber iterative method. Let \(r>1\) be a fixed constant. Stop the algorithm at the first occurrence of \(m=m(\delta)\in\mathbb{N}_{0}\) with

$$ \bigl\Vert Kf^{m,\delta}(\cdot)-g^{\delta}(\cdot) \bigr\Vert \leq r\delta, $$
(4.16)

where \(\|g^{\delta}\|\geq r\delta\).
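Combined with the landweber sketch from Section 3, the stopping rule (4.16) can be implemented as follows (names are ours; max_iter is merely a safety cap):

```python
import numpy as np

def landweber_discrepancy(K, g_delta, a, delta, r=1.1, max_iter=10**8):
    """Iterate (3.13) until the first m with ||K f^m - g^delta|| <= r*delta."""
    f = np.zeros(K.shape[1])
    for m in range(1, max_iter + 1):
        f = f + a * K.T @ (g_delta - K @ f)
        if np.linalg.norm(K @ f - g_delta) <= r * delta:
            return f, m                          # m = m(delta), the a posteriori choice
    raise RuntimeError("discrepancy level r*delta not reached")
```

The value \(r=1.1\) is the one used in the numerical experiments of Section 5.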

Lemma 4.2

Let \(\rho(m)=\|Kf^{m,\delta}(\cdot)-g^{\delta}(\cdot)\|\). Then we have the following conclusions:

(a) \(\rho(m)\) is a continuous function;

(b) \(\lim_{m\rightarrow0}\rho(m)=\|g^{\delta}\|\);

(c) \(\lim_{m\rightarrow+\infty}\rho(m)=0\);

(d) \(\rho(m)\) is a strictly decreasing function for any \(m\in(0,+\infty)\).

Lemma 4.2 shows that the stopping rule (4.16) determines a unique finite regularization parameter \(m(\delta)\).

Lemma 4.3

Let (4.16) hold. Then the regularization parameter m satisfies

$$ m\leq\biggl(\frac{p+2}{2aC_{1}^{2}}\biggr) \biggl(\frac{E}{(r-1)\delta} \biggr)^{\frac{4}{p+2}}. $$
(4.17)

Proof

From (3.15), we have the representation

$$ R_{m}g=\sum_{n=1}^{\infty} \frac{1-(1-aT^{2\alpha}E_{\alpha,1+\alpha }^{2}(-\lambda_{n}T^{\alpha}))^{m}}{T^{\alpha}E_{\alpha,1+\alpha }(-\lambda_{n}T^{\alpha})}g_{n}\varphi_{n}(x) $$
(4.18)

for every \(g\in H^{2}(\Omega)\), so

$$ \Vert KR_{m}g-g \Vert ^{2}=\sum _{n=1}^{\infty}\bigl(1-aT^{2\alpha}E_{\alpha,1+\alpha }^{2} \bigl(-\lambda_{n}T^{\alpha}\bigr)\bigr)^{2m} \bigl\vert (g,\varphi_{n}) \bigr\vert ^{2}. $$
(4.19)

Because \(|1-aT^{2\alpha}E_{\alpha,1+\alpha}^{2}(-\lambda_{n}T^{\alpha })|<1\), we obtain \(\|KR_{m-1}-I\|\leq1\). Using (4.16), we obtain

$$\begin{aligned} \Vert KR_{m-1}g-g \Vert &\geq \bigl\Vert KR_{m-1}g^{\delta}-g^{\delta} \bigr\Vert - \bigl\Vert (KR_{m-1}-I) \bigl(g-g^{\delta}\bigr) \bigr\Vert \\ &\geq r\delta- \Vert KR_{m-1}-I \Vert \delta \\ &\geq(r-1)\delta. \end{aligned}$$

On the other hand, using (3.9), we obtain

$$\begin{aligned} \begin{aligned} \Vert KR_{m-1}g-g \Vert ={}& \Biggl\Vert \sum _{n=1}^{\infty}\bigl(1-\bigl(1-aT^{2\alpha}E_{\alpha ,1+\alpha}^{2} \bigl(-\lambda_{n}T^{\alpha}\bigr)\bigr)^{m-1} \bigr)g_{n}\varphi_{n}-\sum_{n=1}^{\infty}g_{n} \varphi_{n} \Biggr\Vert \\ ={}&\Biggl(\sum_{n=1}^{\infty}\bigl(1-aT^{2\alpha}E_{\alpha,1+\alpha }^{2} \bigl(-\lambda_{n}T^{\alpha}\bigr)\bigr)^{2(m-1)} T^{2\alpha}E_{\alpha,1+\alpha}^{2}\bigl(-\lambda_{n}T^{\alpha} \bigr) \bigl\vert (f,\varphi_{n})\lambda_{n}^{\frac{p}{2}} \bigr\vert ^{2}\lambda_{n}^{-p}\Biggr)^{\frac{1}{2}} \\ \leq{}&\sup_{n\geq1}\bigl[\bigl(1-aT^{2\alpha}E_{\alpha,1+\alpha }^{2} \bigl(-\lambda_{n}T^{\alpha}\bigr)\bigr)^{m-1}T^{\alpha}E_{\alpha,1+\alpha } \bigl(-\lambda_{n}T^{\alpha}\bigr) \lambda_{n}^{-\frac{p}{2}} \bigr]E. \end{aligned} \end{aligned}$$

Let

$$ C(n):=\bigl(1-aT^{2\alpha}E_{\alpha,1+\alpha}^{2} \bigl(-\lambda_{n}T^{\alpha }\bigr)\bigr)^{m-1}T^{\alpha}E_{\alpha,1+\alpha} \bigl(-\lambda_{n}T^{\alpha}\bigr) (\lambda _{n})^{-\frac{p}{2}}, $$
(4.20)

so

$$ (r-1)\delta\leq\sup_{n\geq1}C(n)E. $$
(4.21)

Using Lemma 2.5, we have

$$ C(n)\leq\biggl(1-a\frac{C_{1}^{2}}{\lambda_{n}^{2}}\biggr)^{m-1} \lambda_{n}^{-\frac {p}{2}-1}. $$
(4.22)

Let

$$ G(s)=\biggl(1-a\frac{C_{1}^{2}}{s^{2}}\biggr)^{m-1}s^{-\frac{p}{2}-1}, \quad s:=\lambda_{n}. $$
(4.23)

Suppose \(s_{\ast}\) satisfies \(G'(s_{\ast})=0\). Then we get

$$ s_{\ast}=\biggl(\frac{aC_{1}^{2}(4m+p-2)}{p+2}\biggr)^{\frac{1}{2}}, $$
(4.24)

so

$$\begin{aligned} G(s_{\ast})&=\biggl(1-\frac{p+2}{4m+p-2}\biggr)^{m-1}\biggl( \frac {aC_{1}^{2}(4m+p-2)}{p+2}\biggr)^{-\frac{p+2}{4}} \\ & \leq\biggl(\frac{p+2}{2maC_{1}^{2}}\biggr)^{\frac{p+2}{4}}. \end{aligned}$$
(4.25)

Using (4.21) and (4.25), we get

$$ (r-1)\delta\leq\biggl(\frac{p+2}{2aC_{1}^{2}}\biggr)^{\frac{p+2}{4}}m^{-\frac{p+2}{4}}E. $$
(4.26)

Thus

$$m\leq\biggl(\frac{p+2}{2aC_{1}^{2}}\biggr) \biggl(\frac{E}{(r-1)\delta} \biggr)^{\frac{4}{p+2}}. $$

 □

Theorem 4.4

Let \(f(x)\), given by (3.6), be the exact solution of problem (1.1). Let \(f^{m,\delta}(x)\) be the regularization solution. Let conditions (1.5) and (3.9) hold and let the regularization parameter be chosen by the stopping rule (4.16). Then we have the following error estimate:

$$ \bigl\Vert f^{m,\delta}(\cdot)-f(\cdot) \bigr\Vert \leq \bigl(C_{2}(r+1)^{\frac {p}{p+2}}+C_{4}\bigr)E^{\frac{2}{p+2}} \delta^{\frac{p}{p+2}}, $$
(4.27)

where \(C_{4}=(\frac{p+2}{2C_{1}^{2}})^{\frac{1}{2}}(\frac {1}{r-1})^{\frac{2}{p+2}}\).

Proof

Using the triangle inequality, we obtain

$$ \bigl\Vert f^{m,\delta}(\cdot)-f(\cdot) \bigr\Vert \leq \bigl\Vert f^{m,\delta}(\cdot)-f^{m}(\cdot ) \bigr\Vert + \bigl\Vert f^{m}(\cdot)-f(\cdot) \bigr\Vert . $$
(4.28)

Using (4.7) and Lemma 4.3, we get

$$ \bigl\Vert f^{m,\delta}(\cdot)-f^{m}(\cdot) \bigr\Vert \leq\sqrt{am}\delta\leq C_{4}E^{\frac{2}{p+2}} \delta^{\frac{p}{p+2}}, $$
(4.29)

where \(C_{4}:=(\frac{p+2}{2C_{1}^{2}})^{\frac{1}{2}}(\frac {1}{r-1})^{\frac{2}{p+2}}\).

For the second term on the right side of (4.28), we have

$$\begin{aligned} K\bigl(f^{m}(\cdot)-f(\cdot)\bigr) =&\sum _{n=1}^{\infty}-\bigl(1-aT^{2\alpha}E_{\alpha,1+\alpha }^{2} \bigl(-\lambda_{n}T^{\alpha}\bigr)\bigr)^{m}g_{n} \varphi_{n}(x) \\ =&\sum_{n=1}^{\infty}-\bigl(1-aT^{2\alpha}E_{\alpha,1+\alpha }^{2} \bigl(-\lambda_{n}T^{\alpha}\bigr)\bigr)^{m} \bigl(g_{n}-g_{n}^{\delta}\bigr)\varphi_{n}(x) \\ &{}+\sum_{n=1}^{\infty}-\bigl(1-aT^{2\alpha}E_{\alpha,1+\alpha }^{2} \bigl(-\lambda_{n}T^{\alpha}\bigr)\bigr)^{m}g_{n}^{\delta} \varphi_{n}(x). \end{aligned}$$

Using (1.5) and (4.16), we have

$$ \bigl\Vert K\bigl(f^{m}(\cdot)-f(\cdot)\bigr) \bigr\Vert \leq(r+1)\delta. $$
(4.30)

We also have

$$\begin{aligned} \bigl\Vert f^{m}(\cdot)-f(\cdot) \bigr\Vert _{D((-L)^{\frac{p}{2}})} =& \Biggl(\sum_{n=1}^{\infty}\bigl(1-aT^{2\alpha}E_{\alpha,1+\alpha }^{2} \bigl(-\lambda_{n}T^{\alpha}\bigr)\bigr)^{2m} \biggl(\frac{g_{n}}{T^{\alpha}E_{\alpha,1+\alpha}(-\lambda _{n}T^{\alpha})}\biggr)^{2}(\lambda_{n})^{p} \Biggr)^{\frac{1}{2}} \\ \leq&\Biggl(\sum_{n=1}^{\infty}( \lambda_{n})^{p}\biggl(\frac {g_{n}}{T^{\alpha}E_{\alpha,1+\alpha}(-\lambda_{n}T^{\alpha })} \biggr)^{2}\Biggr)^{\frac{1}{2}} \\ \leq& E. \end{aligned}$$

Using Theorem 3.1, we have

$$ \bigl\| f^{m}(\cdot)-f(\cdot)\bigr\| \leq C_{2}(r+1)^{\frac{p}{p+2}}E^{\frac {2}{p+2}} \delta^{\frac{p}{p+2}}. $$
(4.31)

Therefore

$$ \bigl\Vert f^{m,\delta}(\cdot)-f(\cdot) \bigr\Vert \leq \bigl(C_{2}(r+1)^{\frac {p}{p+2}}+C_{4}\bigr)E^{\frac{2}{p+2}} \delta^{\frac{p}{p+2}}. $$
(4.32)

 □

5 Numerical implementation and numerical examples

In this section, we use several numerical examples to show the effectiveness of the Landweber iterative method.

5.1 One-dimensional case

Since the exact solution of problem (1.1) is difficult to obtain in closed form, we generate the data function \(g(x)\) by solving the following direct problem:

$$ \textstyle\begin{cases} D_{t}^{\alpha}u(x,t)-(Lu)(x,t)=f(x), & 0< t< T,0< x< 1, \\ u(x,0)=0, & 0\leq x\leq1, \\ u(0,t)=0, & 0\leq t\leq T, \\ u(1,t)=0, & 0\leq t\leq T. \end{cases} $$
(5.1)

When the source function \(f(x)\) is given, we use the finite difference method to obtain the data function \(g(x)\).

The time and space step sizes of the grid are \(\Delta t=\frac{T}{N}\) and \(\Delta x=\frac{1}{M}\), respectively. Here \(t_{n}=n\Delta t\), \(n=0,1,2,\ldots,N\), denotes the grid points on the time interval \([0,T]\), and \(x_{i}=i\Delta x\), \(i=0,1,2,\ldots,M\), denotes the grid points of the space interval \([0,1]\). The approximate value of u at each grid point is denoted by \(u_{i}^{n}=u(x_{i}, t_{n})\).

The following difference scheme for the time-fractional derivative is given in [18, 19]:

$$ D_{t}^{\alpha}u(x_{i}, t_{n})\approx \frac{(\Delta t)^{-\alpha }}{\Gamma(2-\alpha)}\sum_{j=0}^{n-1}b_{j} \bigl(u_{i}^{n-j}-u_{i}^{n-j-1}\bigr), $$
(5.2)

where \(i=1,\ldots,M-1\), \(n=1,\ldots,N\) and \(b_{j}=(j+1)^{1-\alpha}-j^{1-\alpha}\).

The spatial derivative difference scheme is given as follows [27]:

$$ Lu(x_{i},t_{n})\approx\frac{1}{(\Delta x)^{2}} \bigl(a_{i+\frac {1}{2}}u_{i+1}^{n}-(a_{i+\frac{1}{2}}+a_{i-\frac {1}{2}})u_{i}^{n}+a_{i-\frac{1}{2}}u_{i-1}^{n} \bigr)+c(x_{i})u_{i}^{n}, $$
(5.3)

where \(a_{i+\frac{1}{2}}=a(x_{i+\frac{1}{2}})\), \(x_{i+\frac{1}{2}}=\frac {x_{i}+x_{i+1}}{2}\).

For the inverse problem, we need to obtain a matrix K such that \(Kf=u_{i}^{N}\). In order to obtain it, we use the same method as in [27], that is,

$$\begin{aligned} \begin{aligned} &K^{1}=A, \\ &K^{n}=A-hA\sum_{j=0}^{n-2}(b_{j+1}-b_{j})K^{n-j-1}, \quad n=2,\ldots,N, \\ &K=K^{N}, \end{aligned} \end{aligned}$$

where \(h:=\frac{(\Delta t)^{-\alpha}}{\Gamma(2-\alpha)}\),

$$ A_{(M+1)\times(M+1)}=\left ( \begin{matrix} 0 & & \\ &\widehat{A}_{(M-1)\times(M-1)}^{-1} & \\ & & 0 \end{matrix} \right ) $$

and

$$ \widehat{A}_{(M-1)\times(M-1)}=\left ( \begin{matrix} d_{1} & -\frac{1}{(\Delta x)^{2}}a_{\frac{3}{2}} & & \\ -\frac{1}{(\Delta x)^{2}}a_{\frac{3}{2}}&d_{2} & -\frac {1}{(\Delta x)^{2}}a_{\frac{5}{2}} &\\ \ddots& \ddots& \ddots&\\ & & -\frac{1}{(\Delta x)^{2}}a_{M-\frac{3}{2}}&d_{M-1} \end{matrix} \right ), $$

where \(d_{i}=\frac{1}{(\Delta x)^{2}}(a_{i+\frac {1}{2}}+a_{i-\frac{1}{2}})-c(x_{i})+h\), \(i=1,\ldots, M-1\). Then we obtain the regularization solution by

$$ f=a\sum_{k=0}^{m-1}\bigl(I-aK^{T}K \bigr)^{k}K^{T}g^{\delta}. $$
(5.4)
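A Python/NumPy sketch of this construction (variable names and the coefficient-function interface are ours; the commented call at the end matches the parameters of the one-dimensional experiments below). Given K, the regularized solution (5.4) is exactly what the landweber routine sketched in Section 3 returns:

```python
import math
import numpy as np

def build_K(M, N, T, alpha, a_coef, c_coef):
    """Assemble the forward matrix K with K f = u^N for the scheme (5.2)-(5.3)."""
    dx, dt = 1.0 / M, T / N
    h = dt**(-alpha) / math.gamma(2.0 - alpha)
    j = np.arange(N)
    b = (j + 1.0)**(1.0 - alpha) - j**(1.0 - alpha)       # weights b_j
    x = np.linspace(0.0, 1.0, M + 1)
    ap = a_coef((x[1:M] + x[2:M + 1]) / 2) / dx**2        # a_{i+1/2}/dx^2
    am = a_coef((x[1:M] + x[0:M - 1]) / 2) / dx**2        # a_{i-1/2}/dx^2
    d = ap + am - c_coef(x[1:M]) + h                      # diagonal entries d_i
    Ahat = np.diag(d) - np.diag(ap[:-1], 1) - np.diag(am[1:], -1)
    A = np.zeros((M + 1, M + 1))
    A[1:M, 1:M] = np.linalg.inv(Ahat)                     # A with zero boundary rows
    Ks = [A]                                              # Ks[n-1] holds K^n
    for n in range(2, N + 1):
        S = sum((b[jj + 1] - b[jj]) * Ks[n - jj - 2] for jj in range(n - 1))
        Ks.append(A - h * A @ S)
    return Ks[-1]                                         # K = K^N

# K = build_K(50, 100, 1.0, 0.5, lambda x: x**2 + 1, lambda x: -(x + 1))
```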

Noise data are generated by adding a random perturbation, i.e.,

$$g^{\delta}=g+\varepsilon\bigl(\operatorname{rand}\bigl( \operatorname{size}(g)\bigr)\bigr). $$

The relative error level and absolute error level are computed by

$$ e_{r}=\frac{\sqrt{\sum(f-f^{m,\delta})^{2}}}{\sqrt{\sum(f)^{2}}} \quad \text{and}\quad e_{a}= \sqrt{\frac{1}{(M+1)}\sum\bigl(f-f^{m,\delta}\bigr)^{2}}. $$
(5.5)
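Equivalently, in NumPy (a small helper of ours):

```python
import numpy as np

def error_levels(f_exact, f_reg):
    """Relative and absolute error levels (5.5) on a grid of M+1 points."""
    e_r = np.linalg.norm(f_exact - f_reg) / np.linalg.norm(f_exact)
    e_a = np.linalg.norm(f_exact - f_reg) / np.sqrt(f_exact.size)
    return e_r, e_a
```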

5.2 Two-dimensional case

Let \(\Omega=(0,l_{1})\times(0,l_{2})\) be a rectangular domain. Consider the following time-fractional diffusion equation:

$$ \textstyle\begin{cases} \partial_{t}^{\alpha}u=u_{xx}+u_{yy}+f(x,y), &(x,y)\in\Omega, t\in (0,T), \\ u(x,y,0)=0, &(x,y)\in\Omega, \\ u(0,y,t)=u(l_{1},y,t)=u(x,0,t)=u(x,l_{2},t)=0, & t\in[0,T]. \end{cases} $$
(5.6)

Let \(x_{i}=i \Delta x\), \(i=0,1,\ldots,M_{1}\); \(y_{j}=j \Delta y\), \(j=0,1,\ldots,M_{2}\); \(t_{n}=n\Delta t\), \(n=0,1,\ldots,N\), where \(\Delta x=\frac{l_{1}}{M_{1}}\), \(\Delta y=\frac{l_{2}}{M_{2}}\) and \(\Delta t=\frac{T}{N}\) are the space and time steps, respectively. The approximate value of u at each grid point is denoted by \(u_{i,j}^{n}\approx u(x_{i},y_{j},t_{n})\). The initial and boundary conditions of equation (5.6) give \(u_{i,j}^{0}=0\), \(u_{0,j}^{n}=u_{M_{1},j}^{n}=0\), \(u_{i,0}^{n}=u_{i,M_{2}}^{n}=0\).

The integer-order spatial derivatives are discretized as follows:

$$\begin{aligned}& \frac{\partial^{2}u(x_{i},y_{j},t_{n+1})}{\partial x^{2}}\approx\frac {u_{i+1,j}^{n+1}-2u_{i,j}^{n+1}+u_{i-1,j}^{n+1}}{(\Delta x)^{2}}, \\& \frac{\partial^{2}u(x_{i},y_{j},t_{n+1})}{\partial y^{2}}\approx\frac {u_{i,j+1}^{n+1}-2u_{i,j}^{n+1}+u_{i,j-1}^{n+1}}{(\Delta y)^{2}}. \end{aligned}$$

It is easy to obtain the numerical solution of \(u(x,y,T)=g(x,y)\) by the scheme

$$ h \sum_{k=0}^{n-1}b_{k} \bigl(u_{i,j}^{n-k}-u_{i,j}^{n-k-1} \bigr)=p_{x}\bigl(u_{i+1,j}^{n}-2u_{i,j}^{n}+u_{i-1,j}^{n} \bigr) +p_{y}\bigl(u_{i,j+1}^{n}-2u_{i,j}^{n}+u_{i,j-1}^{n} \bigr)+f_{i,j}, $$
(5.7)

where \(p_{x}=\frac{1}{(\Delta x)^{2}}\), \(p_{y}=\frac{1}{(\Delta y)^{2}}\) and h and \(b_{k}\) are defined in the one-dimensional case.

Denote \(U^{n}=(u_{1,1}^{n},\ldots,u_{M_{1}-1,1}^{n},u_{1,2}^{n},\ldots ,u_{M_{1}-1,2}^{n}, \ldots,u_{1,M_{2}-1}^{n},\ldots,u_{M_{1}-1,M_{2}-1}^{n})\) and \(f=(f_{1,1},\ldots, f_{M_{1}-1,1},f_{1,2},\ldots,f_{M_{1}-1,2},\ldots ,f_{1,M_{2}-1},\ldots,f_{M_{1}-1,M_{2}-1})\). Then we obtain the following iterative scheme:

$$ \begin{aligned} &A^{*}U^{1} =f, \\ &A^{*}U^{n} =f+h\bigl(\omega_{1}U^{n-1}+ \omega_{2}U^{n-2}+\cdots+\omega _{n-1}U^{1} \bigr) \quad (n=2,3,\ldots,N), \end{aligned} $$
(5.8)

where \(\omega_{i}=b_{i-1}-b_{i}\) and

$$\begin{aligned}& A^{*}= \left ( \begin{matrix} A_{1,1}^{*} & -p_{y}I & & \\ -p_{y}I & A_{2,2}^{*} & -p_{y}I & \\ & -p_{y}I&\ddots& -p_{y}I \\ & & -p_{y}I & A_{M_{2}-1,M_{2}-1}^{*} \end{matrix} \right )\in\mathbb{R}^{(M_{1}-1)( M_{2}-1)\times(M_{1}-1)( M_{2}-1)}, \\& A^{*}_{i,i}=\left ( \begin{matrix} h+2p_{x}+2p_{y} & -p_{x} & & \\ -p_{x} & h+2p_{x}+2p_{y} & -p_{x} & \\ & -p_{x} & \ddots& -p_{x} \\ & & -p_{x} & h+2p_{x}+2p_{y} \end{matrix} \right )\in\mathbb{R}^{(M_{1}-1)\times(M_{1}-1)}, \end{aligned}$$

where I is the identity matrix of order \(M_{1}-1\).
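With the unknowns ordered x-fastest as above, A* can be assembled compactly with Kronecker products; a sketch of our own, equivalent to the block form above:

```python
import numpy as np

def build_A_star(M1, M2, dx, dy, h):
    """Assemble A* of (5.8): h*I plus the five-point Laplacian stencil."""
    px, py = 1.0 / dx**2, 1.0 / dy**2
    def tridiag(m):                              # 1-D second-difference matrix
        return (2.0 * np.eye(m)
                - np.diag(np.ones(m - 1), 1)
                - np.diag(np.ones(m - 1), -1))
    Ix, Iy = np.eye(M1 - 1), np.eye(M2 - 1)
    return (h * np.kron(Iy, Ix)
            + px * np.kron(Iy, tridiag(M1 - 1))
            + py * np.kron(tridiag(M2 - 1), Ix))
```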

For the inverse problem, we can obtain a matrix \(\mathcal{K}\) such that \(\mathcal{K}f=u_{i,j}^{N}\) by

$$\begin{aligned}& \mathcal{K}^{1}=\bigl(A^{*}\bigr)^{-1}, \\& \mathcal{K}^{n}=\mathcal{K}^{1}+h\mathcal{K}^{1} \sum_{i=1}^{n-1}\omega_{i} \mathcal{K}^{n-i}, \quad n=2,\ldots,N, \\& \mathcal{K}=\mathcal{K}^{N}. \end{aligned}$$

We generate the noisy data \(g^{\delta}\) by adding a random perturbation, i.e.,

$$ g^{\delta}(\cdot,\cdot) =g(\cdot,\cdot)+\varepsilon\cdot \operatorname{rand}\bigl(\operatorname{size}(g)\bigr). $$

Then we obtain the regularization solution in the two-dimensional case by

$$ f=a\sum_{k=0}^{m-1}\bigl(I-a \mathcal{K}^{T}\mathcal{K}\bigr)^{k}\mathcal {K}^{T}g^{\delta}(\cdot,\cdot). $$
(5.9)

For practical problems, the a priori bound is very difficult to obtain, so we only demonstrate the numerical effectiveness under the a posteriori regularization parameter choice rule. The number of iterative steps is determined by solving (4.16) with \(r=1.1\).

In the one-dimensional computational procedure, we choose \(T=1\). Let \(\Omega=(0,1)\), \(a(x)=x^{2}+1\) and \(c(x)=-(x+1)\). We use the algorithm in [44] to compute the Mittag-Leffler function. In the discretization, we solve the direct problem with \(M=100\), \(N=200\), and choose \(M=50\), \(N=100\) for solving the inverse problem.

Example 1

Take the smooth function \(f(x)=x^{\alpha }(1-x)^{\alpha}\sin(2\pi x)\).

Example 2

We take the piecewise smooth function

$$ f(x)= \textstyle\begin{cases} 0, & 0\leq x\leq\frac{1}{4}, \\ 4(x-\frac{1}{4}), &\frac{1}{4}< x\leq\frac{1}{2}, \\ -4(x-\frac{3}{4}), &\frac{1}{2}< x\leq\frac{3}{4}, \\ 0, &\frac{3}{4}< x\leq1. \end{cases} $$
(5.10)

Figure 1 shows the comparison between the exact solution and its regularized solution for the noise levels \(\varepsilon=0.05,0.01,0.001\) in the cases \(\alpha=0.2,0.5,0.9\). The iterative steps are m = 41,336, 94,936 and 1,040,301 for \(\alpha=0.2\); m = 61,244, 235,887 and 167,709 for \(\alpha=0.5\); and m = 80,991, 219,401 and 2,248,377 for \(\alpha=0.9\).

Figure 1. The comparison of numerical effects between the exact solution and its regularized solution for Example 1.

Example 3

Consider the following discontinuous function:

$$ f(x)= \textstyle\begin{cases} 0, & 0\leq x\leq0.2, \\ 1, &0.2< x\leq0.4, \\ 0, &0.4< x\leq0.6, \\ -1, &0.6< x\leq0.8, \\ 0, &0.8< x\leq1. \end{cases} $$
(5.11)

Figure 2 shows the comparison between the exact solution and its regularized solution for the noise levels \(\varepsilon=0.05,0.01,0.001\) in the cases \(\alpha=0.2,0.5,0.9\). The iterative steps are m = 46,890, 213,120 and 902,941 for \(\alpha=0.2\); m = 48,184, 248,462 and 1,514,062 for \(\alpha=0.5\); and m = 47,244, 182,142 and 1,782,127 for \(\alpha=0.9\).

Figure 2. The comparison of numerical effects between the exact solution and its regularized solution for Example 2.

Figure 3 shows the comparison between the exact solution and its regularized solution for the noise levels \(\varepsilon=0.05,0.01,0.001\) in the cases \(\alpha=0.2,0.5,0.9\). The iterative steps are m = 293,388, 3,111,053 and 70,256,177 for \(\alpha=0.2\); m = 289,029, 3,059,035 and 69,693,245 for \(\alpha=0.5\); and m = 446,047, 2,986,943 and 69,181,622 for \(\alpha=0.9\).

Figure 3. The comparison of numerical effects between the exact solution and its regularized solution for Example 3.

In Figures 1-3, we see that the smaller ε and α are, the better the regularized solution is. Moreover, we see that the a posteriori parameter choice rule also works well.

Example 4

Take the source function \(f(x,y)=xy\).

In Example 4, we take \(T=0.5\), \(M_{1}=M_{2}=25\), \(N=50\) and \(l_{1}=l_{2}=1\). Figure 4 shows the comparison between the exact solution and its regularized solution for the noise levels \(\varepsilon=0.01,0.001\) in the case \(\alpha=0.2\). The iterative step is m = 443,531 for \(\varepsilon=0.01\) and m = 6,572,432 for \(\varepsilon=0.001\).

Figure 4. The comparison of numerical effects between the exact solution and its regularized solution for Example 4.

Example 5

Take the source function \(f(x,y)=\sin(x)\sin(y)+\sin(2x)\sin(2y)\).

In Example 5, we take \(T=1\), \(M_{1}=M_{2}=40\), \(N=60\) and \(l_{1}=l_{2}=\pi\). Figure 5 shows the comparison between the exact solution and its regularized solution for the noise levels \(\varepsilon=0.01,0.001\) in the case \(\alpha=0.5\). The iterative step is m = 354 for \(\varepsilon=0.01\) and m = 538 for \(\varepsilon=0.001\).

Figure 5. The comparison of numerical effects between the exact solution and its regularized solution for Example 5.

In Figures 4 and 5, we see that the numerical results are in good agreement with the exact shape, and that the smaller ε is, the better the computed approximation is.

6 Conclusion

In this paper, we consider an inverse problem of identifying an unknown source for a time-fractional diffusion equation with variable coefficients defined in a general domain. This problem is ill-posed, i.e., the solution (if it exists) does not depend continuously on the data. The Landweber iterative regularization method is used to solve this problem for the first time. Under two regularization parameter choice rules, we obtain Hölder type error estimates; in particular, an a posteriori regularization parameter choice rule is provided. In [37], the authors used two regularization methods to identify the space-dependent source for the time-fractional diffusion equation. In [38], the authors used the quasi-reversibility regularization method to identify the space-dependent source for the time-fractional diffusion equation. In [37], under the a priori regularization parameter choice rule, the orders of convergence of the error estimates are \(\mathcal{O}(\delta^{\frac{p}{p+2}})\) (\(0< p\leq4\)) and \(\mathcal{O}(\delta^{\frac{2}{3}})\) (\(p>4\)), while under the a posteriori regularization parameter choice rule they are \(\mathcal{O}(\delta^{\frac{p}{p+2}})\) (\(0< p\leq2\)) and \(\mathcal{O}(\delta^{\frac{2}{3}})\) (\(p>2\)). In [38], under both the a priori and the a posteriori regularization parameter choice rules, the orders of convergence are \(\mathcal{O}(\delta^{\frac{p}{p+2}})\) (\(0< p\leq2\)) and \(\mathcal{O}(\delta^{\frac{2}{3}})\) (\(p>2\)). In our paper, under both the a priori and the a posteriori regularization parameter choice rules, the order of convergence of the error estimate is \(\mathcal{O}(\delta^{\frac{p}{p+2}})\) for every \(p>0\). Comparing with [37, 38], under the a posteriori regularization parameter choice the error estimate order there stops improving once \(p>2\), which is a saturation phenomenon: even if the smoothness of the solution is increased, the order of the error estimate does not improve. In our method, the error estimate converges with order \(\mathcal{O}(\delta^{\frac{p}{p+2}})\), so no saturation appears. Finally, the numerical examples show that the Landweber iterative method is very effective for this kind of ill-posed problem.