1 Introduction

In the last decade, the fractional derivative was used in many physical problems, see [19]. One type of newly defined fractional derivatives without a singular kernel has been suggested, namely, the fractional derivative that was defined by Atangana and Baleanu. The Atangana–Baleanu fractional derivative definition uses a Mittag-Leffler function as a nonlocal kernel. This fractional derivative is more suitable for modeling the fact problems than classical derivatives. The derivative has several interesting properties that are useful for modeling in many branches of sciences, with applications in real-world problems, see [10, 11]. For instance, Atangana and Baleanu have studied some useful properties of the new derivative and applied them to solving the fractional heat transfer model, see [12]; they applied them to the model of groundwater within an unconfined aquifer, see [13]. Alkahtani et al. used the Atangana–Baleanu derivative to research Chua’s circuit model, see [14]. Although there have been many research results on ordinary differential equations for this ABC-fractional derivative, the results on partial differential equations for this derivative are also limited. Especially, the results for the problem of determining the source function are almost not found in recent years. Therefore, we focus on the fractional diffusion equation with the fractional derivative of Atangana–Baleanu to determine an unknown source term as follows:

$$\begin{aligned} \textstyle\begin{cases} {}^{\mathrm{ABC}}_{0}D_{t}^{\gamma }u(x,t) - \Delta u(x,t) = \varPhi (t) f(x), \quad (x,t) \in \varOmega \times (0,T), \\ u(x,t) = 0, \quad x \in \partial \varOmega , t \in (0,T], \\ u(x,0) = 0,\quad x \in \varOmega . \end{cases}\displaystyle \end{aligned}$$
(1.1)

Here, the Atangana–Baleanu fractional derivative \({}^{\mathrm{ABC}}_{0} D_{t}^{\gamma }u(x,t)\) is defined by

$$\begin{aligned} {}^{\mathrm{ABC}}_{0}D_{t}^{\gamma }u(x,t) = \frac{L(\gamma )}{1-\gamma } \int _{0}^{t} \frac{\partial u(x,s)}{\partial s} E_{\gamma ,1} \biggl( \frac{-\gamma (t-s)^{\gamma }}{1-\gamma } \biggr) \,ds, \end{aligned}$$
(1.2)

where \(L(\gamma )\) is a normalization function satisfying \(L(0) = L(1) = 1\); here we take \(L(\gamma ) = 1 - \gamma + \frac{\gamma }{\varGamma (\gamma )}\) (see Definition 2.1 in [15]), and \(E_{\gamma ,1}\) is the Mittag-Leffler function introduced in Sect. 2. Our inverse source problem is to determine \(f(x)\) from the given data Φ and the measured data at the final time \(u(x,T)= g(x) \), \(g \in L^{2}(\varOmega )\).
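To make the definition concrete, the convolution in (1.2) can be evaluated numerically once the Mittag-Leffler kernel is available. The following Python sketch is only an illustration (truncated series for \(E_{\gamma ,1}\), a simple trapezoidal rule, and the common choice \(L(\gamma )=1-\gamma +\gamma /\varGamma (\gamma )\); the function names are ours and not taken from [28]):

```python
import numpy as np
from math import gamma as Gamma

def ml1(z, gamma_, n_terms=100):
    """Truncated series for the one-parameter Mittag-Leffler function E_{gamma,1}(z)."""
    return sum(z**k / Gamma(gamma_ * k + 1.0) for k in range(n_terms))

def abc_derivative(du, t, gamma_, n_nodes=400):
    """Approximate the Atangana-Baleanu-Caputo derivative (1.2) of u at time t.

    du     : callable returning u'(s) for an array of nodes s
    gamma_ : fractional order in (0, 1)
    Uses the normalization L(gamma) = 1 - gamma + gamma/Gamma(gamma) and a
    trapezoidal rule for the convolution integral.
    """
    L = 1.0 - gamma_ + gamma_ / Gamma(gamma_)
    s = np.linspace(0.0, t, n_nodes)
    kernel = np.array([ml1(-gamma_ * (t - si)**gamma_ / (1.0 - gamma_), gamma_) for si in s])
    vals = du(s) * kernel
    h = s[1] - s[0]
    integral = h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))   # trapezoidal rule
    return L / (1.0 - gamma_) * integral

# example: u(t) = t^3, so u'(t) = 3 t^2
print(abc_derivative(lambda s: 3.0 * s**2, t=1.0, gamma_=0.75))
```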

In practice, the exact data \((\varPhi ,g)\) is only known through noisy observation data \((\varPhi _{\epsilon }, g_{\epsilon })\) at a noise level \(\epsilon >0\):

$$\begin{aligned} \Vert \varPhi _{\epsilon }-\varPhi \Vert _{L^{\infty }(0,T)} < \epsilon , \quad\quad \Vert g_{ \epsilon }-g \Vert _{L^{2}(\varOmega )} < \epsilon , \end{aligned}$$
(1.3)

where \(\Vert \varXi \Vert _{L^{\infty }(0,T)}= {\sup }_{0\le t \le T} \vert \varXi (t) \vert \) for any \(\varXi \in L^{\infty }(0,T)\).

In the sense of Hadamard, the inverse source problem (1.1) with observation data satisfying (1.3) is ill-posed in general, i.e., a solution does not depend continuously on the input data \((\varPhi ,g)\). This means that even if the noise level ϵ is small, the error in the sought solution f may be large, which makes numerical computation troublesome. Therefore, a regularization method is required.

The goal of this paper is to determine the source function f from the observation \(g_{\epsilon }\) of \(g(x)\) at the final time \(t=T\) with a noise level of ϵ. To the best of the authors’ knowledge, there are no results on the inverse source problem (1.1) for the Atangana–Baleanu fractional derivative. Motivated by the ideas mentioned above, in this work we apply the generalized Tikhonov method with variable coefficients in a general bounded domain to solve the fractional inverse source problem. We present an estimate of the convergence rate under an a priori bound assumption on the exact solution and an a priori parameter choice rule. The inverse source problem has attracted many authors, and its physical background can be found in [16], Wei et al. [17–19], and Kirane et al. [20, 21]. In [22], Uçar et al. studied mathematical analysis and a numerical scheme for a smoking model with the Atangana–Baleanu fractional derivative. In that paper, the authors carefully study mathematical models for analyzing the dynamics of smoking with the ABC fractional derivative; the existence and uniqueness of solutions to the relevant model are established by fixed point theory, and the numerical results are illustrated by graphics showing the variation of the fractional order.

The paper is organized into six sections as follows. In Sect. 1 we introduce the problem. In Sect. 2, some preliminary results are presented. In Sect. 3, we establish the ill-posedness of the fractional inverse source problem (1.1) and a conditional stability result. In Sect. 4, we propose a generalized Tikhonov regularization method and prove a convergence estimate under an a priori assumption. In Sect. 5, we consider a numerical example to verify the proposed regularization method. Finally, in Sect. 6, we give some concluding remarks.

2 Preliminary results

Definition 2.1

(Hilbert scale space, see [23])

First, let the spectral problem

$$\begin{aligned} \textstyle\begin{cases} \Delta \mathrm{e}_{k}(x) = -\lambda _{k} \mathrm{e}_{k}(x), \quad x \in \varOmega , \\ \mathrm{e}_{k}(x) = 0, \quad x \in \partial \varOmega , \end{cases}\displaystyle \end{aligned}$$

admit the eigenvalues

$$\begin{aligned} 0< \lambda _{1} \leq \lambda _{2} \leq \cdots \le \lambda _{k} \le \cdots \quad \text{with } \lambda _{k} \to \infty \text{ for } k \to \infty . \end{aligned}$$

with corresponding eigenfunctions \(\mathrm{e}_{k} \in H_{0}^{1}(\varOmega )\). The Hilbert scale space \(\mathbb{H}^{m+1}(\varOmega ) \) (\(m >0\)) is defined by

$$\begin{aligned} \mathbb{H}^{m+1}(\varOmega ) := \Biggl\{ f \in L^{2}(\varOmega ) : \sum_{k=1}^{ \infty } \lambda _{k}^{2(m+1)} \langle f,{\mathrm{e}}_{k} \rangle _{L^{2}( \varOmega )}^{2} < \infty \Biggr\} , \end{aligned}$$
(2.1)

with the norm

$$\begin{aligned} \Vert f \Vert _{\mathbb{H}^{m+1}(\varOmega )}^{2} = \sum _{k=1}^{\infty } \lambda _{k}^{2(m+1)} \bigl\vert \langle f,{\mathrm{e}}_{k} \rangle _{L^{2}( \varOmega )} \bigr\vert ^{2} < \infty . \end{aligned}$$
(2.2)
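For later use in Sect. 5 (where the a priori bound E is computed as \(\Vert f \Vert _{\mathbb{H}^{2}(0,\pi )}\)), the weighted sum in (2.2) is easy to evaluate once the Fourier coefficients \(\langle f,\mathrm{e}_{k}\rangle \) are known. A minimal Python sketch, truncating the series at finitely many modes and assuming the one-dimensional eigenvalues \(\lambda _{k}=k^{2}\) used later, reads:

```python
import numpy as np

def hilbert_scale_norm(coeffs, eigenvalues, m):
    """Truncated version of (2.2): ||f||_{H^{m+1}}^2 = sum_k lambda_k^{2(m+1)} <f, e_k>^2."""
    coeffs = np.asarray(coeffs, dtype=float)
    lam = np.asarray(eigenvalues, dtype=float)
    return np.sqrt(np.sum(lam**(2 * (m + 1)) * coeffs**2))

# example: f = e_2 on (0, pi), so <f, e_k> = delta_{k,2}; lambda_k = k^2
k = np.arange(1, 41)
coeffs = (k == 2).astype(float)
print(hilbert_scale_norm(coeffs, k**2, m=1))   # = lambda_2^{m+1} = 4^2 = 16
```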

Let X be a Hilbert space. We denote by \(C ( [0,T ];X )\) and \(L^{p} (0,T;X )\) the Banach spaces of measurable functions \(f:[0,T]\to X\) such that

$$\begin{aligned}& \Vert f \Vert _{L^{p} (0,T;X )}= \biggl( \int _{0}^{T} \bigl\Vert f (t ) \bigr\Vert _{X}^{p}\,dt \biggr)^{1/p}< \infty ,\quad 1\le p< \infty , \\& \Vert f \Vert _{L^{\infty } (0,T;X )}= \mathop{\operatorname{ess}\sup } _{0\le t \le T} \bigl\Vert f (t ) \bigr\Vert _{X}< \infty ,\quad p= \infty , \end{aligned}$$

and

$$\begin{aligned} \Vert f \Vert _{C ( [0,T ];X )}= \sup_{0\le t\le T} \bigl\Vert f (t ) \bigr\Vert _{X}< \infty . \end{aligned}$$

Lemma 2.1

([24])

The definition of the Mittag-Leffler function is as follows:

$$\begin{aligned} E_{\alpha ,\beta }(z) = \sum_{k=0}^{\infty } \frac{z^{k}}{\varGamma (\alpha k + \beta )}, \quad z \in {\mathbb{C}}, \end{aligned}$$
(2.3)

where α, β are arbitrary constants.
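Since the series (2.3) converges for every \(z \in \mathbb{C}\), a direct truncation suffices for the moderate negative real arguments appearing in this paper. The sketch below is a naive Python implementation (for large \(|z|\) a more robust algorithm, such as Podlubny's code [28], should be preferred):

```python
from math import gamma as Gamma

def mittag_leffler(z, alpha, beta, n_terms=120):
    """Truncated series (2.3): E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(alpha*k + beta)."""
    value, zk = 0.0, 1.0          # zk holds z**k
    for k in range(n_terms):
        value += zk / Gamma(alpha * k + beta)
        zk *= z
    return value

# sanity checks: E_{1,1}(z) = exp(z), and E_{alpha,beta}(0) = 1/Gamma(beta)
print(mittag_leffler(-1.0, 1.0, 1.0))    # approximately exp(-1) = 0.36788...
print(mittag_leffler(0.0, 0.75, 0.75))   # equals 1/Gamma(0.75)
```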

Lemma 2.2

([25])

For \(\beta > 0\) and \(\alpha \in \mathbb {R} \), we obtain

$$\begin{aligned} E_{\beta ,\alpha }(y) = y E_{\beta ,\beta + \alpha }(y) + \frac{1}{\varGamma (\alpha )}, \quad y \in {\mathbb{C}}. \end{aligned}$$
(2.4)

Lemma 2.3

([25])

Let \(\xi > 0\); then we obtain

$$\begin{aligned} \frac{d}{dt} E_{\gamma ,1}\bigl(-\xi t^{\gamma }\bigr) = -\xi t^{\gamma -1} E_{ \gamma ,\gamma }\bigl(-\xi t^{\gamma }\bigr), \quad t > 0, 0 < \gamma < 1. \end{aligned}$$
(2.5)

Lemma 2.4

([25])

For \(0 < \gamma < 1 \) and \(\zeta > 0\), we have \(0 < E_{\gamma ,\gamma }(-\zeta ) < \frac{1}{\varGamma (\gamma )}\). Moreover, \(E_{\gamma ,\gamma }(-\zeta )\) is monotonically decreasing in \(\zeta >0\).

Lemma 2.5

([24])

Let \(0 < \gamma _{0} < \gamma _{1} < 1\). Then there exist positive constants \(A_{1}\), \(A_{2}\), \(A_{3}\) depending only on \(\gamma _{0}\), \(\gamma _{1}\) such that, for all \(\gamma \in [\gamma _{0}, \gamma _{1}] \),

$$\begin{aligned} \frac{A_{1}}{1+y} \le E_{\gamma ,1}(-y) \le \frac{A_{2}}{1+y}, \quad\quad E_{ \gamma ,\alpha }(-y) \le \frac{A_{3}}{1+y} \quad \textit{for all } y \ge 0, \alpha \in \mathbb{R}. \end{aligned}$$
(2.6)

Lemma 2.6

([25])

For any \(\lambda _{k}\) satisfying \(\lambda _{k} \geq \lambda _{1} > 0\), there exists a positive constant \(A_{4}\) depending on γ, T, \(\lambda _{1}\) such that

$$\begin{aligned} \frac{A_{4}}{\lambda _{k}T^{\gamma }} \leq E_{\gamma ,\gamma +1}\bigl(- \lambda _{k}T^{\gamma } \bigr) \leq \frac{1}{\lambda _{k}T^{\gamma }}. \end{aligned}$$
(2.7)

Lemma 2.7

For \(\gamma \in (0,1)\) and \(\lambda _{k} \ge \lambda _{1}\), \(\forall k \ge 1\), one obtains

$$\begin{aligned}& (\mathrm{a})\quad \frac{1-\gamma }{\gamma } \leq \frac{L(\gamma ) + \lambda _{k} (1-\gamma )}{\gamma \lambda _{k}} \leq \frac{L(\gamma )}{\gamma \lambda _{1}} + \frac{1-\gamma }{\gamma }. \\& (\mathrm{b})\quad \frac{ (L(\gamma ) + \lambda _{k}(1-\gamma ) )^{2}}{\gamma L(\gamma )} \leq \frac{ ( \frac{L(\gamma )}{\lambda _{k}} + (1-\gamma ) )^{2}}{\gamma L(\gamma )} \lambda _{k}^{2} \leq \frac{ ( \frac{L(\gamma )}{\lambda _{1}} + (1-\gamma ) )^{2}}{\gamma L(\gamma )} \lambda _{k}^{2} . \end{aligned}$$

Lemma 2.8

For any \(\lambda _{k} \ge \lambda _{1}\) \(\forall k \in {\mathbb{N}}\) and \(\gamma \in (0,1)\), we denote

$$\begin{aligned}& \begin{gathered} A_{k}(\gamma ) = \bigl( \gamma L(\gamma ) \bigr)^{-1} \bigl( L(\gamma ) + \lambda _{k} (1-\gamma ) \bigr)^{2}, \\ H_{\gamma }(\lambda _{k},s) = E_{\gamma ,\gamma } \biggl( - \frac{\gamma \lambda _{k} (T-s)^{\gamma }}{ L(\gamma ) + \lambda _{k} (1- \gamma ) } \biggr) (T-s)^{\gamma -1}. \end{gathered} \end{aligned}$$
(2.8)

Using Lemma 2.7, we obtain

$$\begin{aligned} \biggl( \frac{1-\gamma }{\gamma } \biggr) \biggl(1 - E_{\gamma ,1} \biggl( \frac{-\gamma \lambda _{1} T^{\gamma }}{L(\gamma ) + \lambda _{1} (1-\gamma )} \biggr) \biggr) &\leq \int _{0}^{T} H_{\gamma }(\lambda _{k},s) \,ds \\ &\leq \frac{L(\gamma ) + \lambda _{k} (1-\gamma )}{\gamma \lambda _{k}}. \end{aligned}$$
(2.9)

Proof

Since \(E_{\gamma , \gamma }(-y) \geq 0\) for \(0 < \gamma < 1\) and \(y \geq 0\), we obtain

$$\begin{aligned} \int _{0}^{T} H_{\gamma }(\lambda _{k},s) \,ds &\geq \biggl( \frac{L(\gamma ) + \lambda _{k} (1-\gamma )}{\gamma \lambda _{k}} \biggr) \biggl(1 - E_{\gamma ,1} \biggl( \frac{-\gamma \lambda _{1} T^{\gamma }}{L(\gamma ) + \lambda _{1} (1-\gamma )} \biggr) \biggr) \\ &\geq \biggl( \frac{1-\gamma }{\gamma } \biggr) \biggl(1 - E_{\gamma ,1} \biggl( \frac{-\gamma \lambda _{1} T^{\gamma }}{L(\gamma ) + \lambda _{1} (1-\gamma )} \biggr) \biggr) \end{aligned}$$
(2.10)

and

$$\begin{aligned} \int _{0}^{T} H_{\gamma }(\lambda _{k},s) \,ds &= - \frac{L(\gamma ) + \lambda _{k}(1-\gamma ) }{\gamma \lambda _{k}} \int _{0}^{T} \frac{d}{ds} \biggl( E_{\gamma ,1} \biggl( \frac{-\gamma \lambda _{k}(T-s)^{\gamma }}{L(\gamma ) + \lambda _{k} (1-\gamma ) } \biggr) \biggr) \,ds \\ &= \frac{L(\gamma ) + \lambda _{k}(1-\gamma ) }{\gamma \lambda _{k}} \biggl(1 - E_{\gamma ,1} \biggl( \frac{-\gamma \lambda _{k} T^{\gamma }}{L(\gamma ) + \lambda _{k} (1-\gamma )} \biggr) \biggr) \\ &\leq \frac{L(\gamma ) + \lambda _{k} (1-\gamma )}{\gamma \lambda _{k}}. \end{aligned}$$
(2.11)

 □
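The exact evaluation of the integral in (2.11) follows from Lemma 2.3 and can be checked numerically. The following Python sketch (truncated-series Mittag-Leffler function, midpoint quadrature, and purely illustrative values of γ, T, \(\lambda _{k}\)) compares the integral with the closed form \(\xi ^{-1}(1-E_{\gamma ,1}(-\xi T^{\gamma }))\), where \(\xi = \gamma \lambda _{k}/(L(\gamma )+\lambda _{k}(1-\gamma ))\):

```python
import numpy as np
from math import gamma as Gamma

def ml(z, alpha, beta, n_terms=120):
    """Truncated Mittag-Leffler series (2.3)."""
    return sum(z**j / Gamma(alpha * j + beta) for j in range(n_terms))

gamma_, T, lam = 0.75, 1.0, 4.0                        # illustrative values of gamma, T, lambda_k
L = 1.0 - gamma_ + gamma_ / Gamma(gamma_)              # L(gamma)
xi = gamma_ * lam / (L + lam * (1.0 - gamma_))

# left-hand side: int_0^T E_{gamma,gamma}(-xi (T-s)^gamma) (T-s)^{gamma-1} ds
n = 4000
h = T / n
s = (np.arange(n) + 0.5) * h                           # midpoint rule handles the weak singularity at s = T
lhs = h * sum(ml(-xi * (T - si)**gamma_, gamma_, gamma_) * (T - si)**(gamma_ - 1.0) for si in s)

# closed form from (2.11): (1/xi) * (1 - E_{gamma,1}(-xi T^gamma))
rhs = (1.0 / xi) * (1.0 - ml(-xi * T**gamma_, gamma_, 1.0))
print(lhs, rhs)   # the two values agree up to the quadrature error
```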

Lemma 2.9

Assume that there exists a positive constant \(\varPhi _{0}\) such that \(\vert \varPhi _{0} \vert \le \vert \varPhi (t) \vert \le \Vert \varPhi \Vert _{L^{\infty }(0,T)}\) \(\forall t \in [0,T]\). Choosing \(\varepsilon \in (0, \frac{ \vert \varPhi _{0} \vert }{4} )\), we obtain

$$\begin{aligned} \frac{ \vert \varPhi _{0} \vert }{4} \le \bigl\vert \varPhi _{\varepsilon }(t) \bigr\vert \le \mathcal{S} \bigl( \vert \varPhi _{0} \vert , \Vert \varPhi \Vert _{L^{\infty }(0,T)} \bigr). \end{aligned}$$
(2.12)

Proof

We notice that

$$\begin{aligned} \bigl\vert \varPhi (t) \bigr\vert \le \bigl\vert \varPhi _{\varepsilon }(t) \bigr\vert + \bigl\vert \varPhi (t) - \varPhi _{\varepsilon }(t) \bigr\vert \le \bigl\vert \varPhi _{\varepsilon }(t) \bigr\vert + \Vert \varPhi _{\varepsilon } - \varPhi \Vert _{L^{\infty }(0,T)} \le \bigl\vert \varPhi _{\varepsilon }(t) \bigr\vert + \varepsilon . \end{aligned}$$
(2.13)

From (2.13), we obtain

$$\begin{aligned} \bigl\vert \varPhi _{\varepsilon }(t) \bigr\vert \ge \bigl\vert \varPhi (t) \bigr\vert - \varepsilon \ge \vert \varPhi _{0} \vert - \varepsilon \ge \frac{ \vert \varPhi _{0} \vert }{4} . \end{aligned}$$
(2.14)

Similarly, we get

$$\begin{aligned} \bigl\vert \varPhi _{\varepsilon }(t) \bigr\vert \le \Vert \varPhi \Vert _{L^{\infty }(0,T)} + \varepsilon < \Vert \varPhi \Vert _{L^{\infty }(0,T)} + \frac{ \vert \varPhi _{0} \vert }{4}. \end{aligned}$$
(2.15)

Denoting \(\mathcal{S} ( \vert \varPhi _{0} \vert , \Vert \varPhi \Vert _{L^{\infty }(0,T)} ) = \Vert \varPhi \Vert _{L^{\infty }(0,T)} + \frac{ \vert \varPhi _{0} \vert }{4}\), combining (2.14) and (2.15) leads to (2.12). □

3 Regularization and error estimate for unknown source (1.1)

Assume that problem (1.1) has a solution u of the form \(u(x,t) = \sum_{k=1}^{\infty } u_{k}(t) \mathrm{e}_{k}(x)\) with \(u_{k}(t) = \langle u(x,t), \mathrm{e}_{k}(x) \rangle \). Then we have the fractional integro-differential equation involving the Atangana–Baleanu fractional derivative in the form

$$\begin{aligned} {}^{\mathrm{ABC}}_{0}D_{t}^{\gamma }u(x,t) - \Delta u(x,t) = \varPhi (t) f(x), \end{aligned}$$
(3.1)

together with the condition \(u_{k}(0) = \langle u_{0}(x), \mathrm{e}_{k}(x)\rangle \). The solution of this initial value problem is given by (see [26]):

$$\begin{aligned} u_{k}(t) &= \biggl( \frac{L(\gamma )}{L(\gamma ) + \lambda _{k} (1- \gamma ) } \biggr) E_{ \gamma ,1} \biggl( \frac{-\gamma \lambda _{k} t^{\gamma } }{L(\gamma ) + \lambda _{k} (1- \gamma ) } \biggr) \bigl\langle u_{0}(x), \mathrm{e}_{k}(x) \bigr\rangle + \biggl( \frac{1-\gamma }{ L(\gamma ) + \lambda _{k} (1- \gamma ) } \biggr) \varPhi (t) \bigl\langle f(x), \mathrm{e}_{k}(x) \bigr\rangle \\ &\quad{} + \frac{\gamma L(\gamma )}{ ( L(\gamma ) + \lambda _{k} (1- \gamma ) )^{2}} \int _{0}^{t} E_{\gamma ,\gamma } \biggl( - \frac{\gamma \lambda _{k} (t-s)^{\gamma }}{L(\gamma ) + \lambda _{k} (1- \gamma ) } \biggr) (t-s)^{\gamma -1} \varPhi (s) \bigl\langle f(x), \mathrm{e}_{k}(x) \bigr\rangle \,ds . \end{aligned}$$
(3.2)

From (3.2) we obtain

$$\begin{aligned} u(x,t) &= \sum_{k=1}^{\infty } \biggl( \frac{L(\gamma )}{L(\gamma ) + \lambda _{k} (1- \gamma ) } \biggr) E_{ \gamma ,1} \biggl( \frac{-\gamma \lambda _{k} t^{\gamma } }{L(\gamma ) + \lambda _{k} (1- \gamma ) } \biggr) \bigl\langle u_{0}(x), \mathrm{e}_{k}(x) \bigr\rangle \mathrm{e}_{k}(x) \\ &\quad{} + \sum_{k=1}^{\infty } \biggl( \frac{1-\gamma }{ L(\gamma ) + \lambda _{k} (1- \gamma ) } \biggr) \varPhi (t) \bigl\langle f(x), \mathrm{e}_{k}(x) \bigr\rangle \mathrm{e}_{k}(x) + \sum_{k=1}^{\infty } \frac{\gamma L(\gamma )}{ ( L(\gamma ) + \lambda _{k} (1- \gamma ) )^{2}} \\ &\quad{}\times \biggl( \int _{0}^{t} E_{\gamma ,\gamma } \biggl( - \frac{\gamma \lambda _{k} (t-s)^{\gamma }}{L(\gamma ) + \lambda _{k} (1- \gamma ) } \biggr) (t-s)^{\gamma -1} \varPhi (s) \bigl\langle f(x), \mathrm{e}_{k}(x) \bigr\rangle \,ds \biggr) \mathrm{e}_{k}(x). \end{aligned}$$
(3.3)

From (3.3), applying \(u(x,0) = 0\) and letting \(t=T\), we have

$$\begin{aligned} u(x,T) &= \sum_{k=1}^{\infty } \frac{1-\gamma }{ L(\gamma ) + \lambda _{k} (1- \gamma ) } \varPhi (T) \bigl\langle f(x), \mathrm{e}_{k}(x) \bigr\rangle \mathrm{e}_{k}(x) + \sum_{k=1}^{ \infty } \frac{\gamma L(\gamma )}{ ( L(\gamma ) + \lambda _{k} (1- \gamma ) )^{2}} \\ &\quad{}\times \biggl( \int _{0}^{T} E_{\gamma ,\gamma } \biggl( - \frac{\gamma \lambda _{k} (T-s)^{\gamma }}{ L(\gamma ) + \lambda _{k} (1- \gamma ) } \biggr) (T-s)^{\gamma -1} \varPhi (s) \bigl\langle f(x), \mathrm{e}_{k}(x)\bigr\rangle \,ds \biggr) \mathrm{e}_{k}(x). \end{aligned}$$
(3.4)

Next, substituting \(u(x,T) = g(x)\) and using the assumption \(\varPhi (T)=0\), we get

$$\begin{aligned} \begin{aligned}[b] g(x) ={}&\sum_{k=1}^{\infty } \frac{\gamma L(\gamma ) \langle f(x),\mathrm{e}_{k}(x) \rangle }{ ( L(\gamma ) + \lambda _{k} (1- \gamma ) )^{2}} \\ &{}\times\biggl( \int _{0}^{T} E_{\gamma ,\gamma } \biggl( - \frac{\gamma \lambda _{k} (T-s)^{\gamma }}{ L(\gamma ) + \lambda _{k} (1- \gamma ) } \biggr) (T-s)^{\gamma -1} \varPhi (s) \,ds \biggr) \mathrm{e}_{k}(x). \end{aligned} \end{aligned}$$
(3.5)

A simple transformation gives

$$\begin{aligned} \bigl\langle f(x),\mathrm{e}_{k}(x)\bigr\rangle = \frac{ ( \gamma L(\gamma ) )^{-1} (L(\gamma ) + \lambda _{k} (1-\gamma ) )^{2} }{ ( \int _{0}^{T} E_{\gamma ,\gamma } ( - \frac{\gamma \lambda _{k} (T-s)^{\gamma }}{ L(\gamma ) + \lambda _{k} (1- \gamma ) } ) (T-s)^{\gamma -1} \varPhi (s) \,ds )} \bigl\langle g(x),\mathrm{e}_{k}(x) \bigr\rangle . \end{aligned}$$
(3.6)

From (3.6), we can see that

$$\begin{aligned} f(x) = \sum_{k=1}^{\infty } \frac{ ( \gamma L(\gamma ) )^{-1} ( L(\gamma ) + \lambda _{k} (1-\gamma ) )^{2} }{ ( \int _{0}^{T} E_{\gamma ,\gamma } ( - \frac{\gamma \lambda _{k} (T-s)^{\gamma }}{ L(\gamma ) + \lambda _{k} (1- \gamma ) } ) (T-s)^{\gamma -1} \varPhi (s) \,ds ) } \bigl\langle g(x),\mathrm{e}_{k}(x) \bigr\rangle \mathrm{e}_{k}(x). \end{aligned}$$
(3.7)

Next, we recall \(A_{k}(\gamma )\) and \(H_{\gamma }(\lambda _{k},s)\) in Lemma 2.8, we have the source function f as follows:

$$\begin{aligned} f(x) = \sum_{k=1}^{\infty } \frac{ \langle g(x),\mathrm{e}_{k}(x) \rangle \mathrm{e}_{k}(x) }{ [A_{k}(\gamma ) ]^{-1} ( \int _{0}^{T} H_{\gamma }(\lambda _{k},s) \varPhi (s)\,ds ) }. \end{aligned}$$
(3.8)

3.1 The ill-posedness of the inverse source problem

Theorem 3.1

The unknown source problem (1.1) is not well-posed.

Proof

First of all, we define a linear operator as follows:

$$\begin{aligned} \mathcal{P}f(x) &= \sum_{k=1}^{\infty } \biggl[ \int _{0}^{T} H_{ \gamma }(\lambda _{k},s) \varPhi (s) \,ds \biggr] \bigl[A_{k}(\gamma ) \bigr]^{-1} \bigl\langle f(x),\mathrm{e}_{k}(x)\bigr\rangle \mathrm{e}_{k}(x) \\ &= \int _{\varOmega } g(x,\xi ) f(\xi ) \,d\xi , \end{aligned}$$
(3.9)

in which

$$\begin{aligned} g(x,\omega ) = \sum_{k=1}^{\infty } \biggl[ \int _{0}^{T}H_{ \gamma }(\lambda _{k},s) \varPhi (s) \,ds \biggr] \bigl[A_{k}(\gamma ) \bigr]^{-1} \mathrm{e}_{k}(x) \mathrm{e}_{k}( \omega ). \end{aligned}$$
(3.10)

From the symmetry of the kernel, \(g(x,\omega ) = g(\omega ,x)\), we can see that \(\mathcal{P}\) is a self-adjoint operator. In the next step, we prove its compactness. To do this, we define the finite rank operator \(\mathcal{P}_{N}\) as follows:

$$\begin{aligned} \mathcal{P}_{N}f(x) = \sum _{k=1}^{N} \biggl[ \int _{0}^{T} H_{ \gamma }(\lambda _{k},s) \varPhi (s) \,ds \biggr] \bigl[A_{k}(\gamma ) \bigr]^{-1} \bigl\langle f(x),\mathrm{e}_{k}(x)\bigr\rangle \mathrm{e}_{k}(x). \end{aligned}$$
(3.11)

Then, from (3.9) and (3.11), we obtain

$$\begin{aligned} \Vert \mathcal{P}_{N}f - \mathcal{P}f \Vert _{L^{2}(\varOmega )}^{2} &= \sum_{k=N+1}^{\infty } \biggl[ \int _{0}^{T} H_{\gamma }( \lambda _{k},s) \varPhi (s) \,ds \biggr]^{2} \bigl[A_{k}( \gamma ) \bigr]^{-2} \bigl\vert \bigl\langle f(x), \mathrm{e}_{k}(x)\bigr\rangle \bigr\vert ^{2} \\ &\le \Vert \varPhi \Vert ^{2}_{L^{\infty }(0,T)} \sum _{k=N+1}^{\infty } \frac{\gamma [L(\gamma )]^{2}}{ \lambda _{k}^{2} (L(\gamma )+\lambda _{k}(1-\gamma ) )^{2} } \bigl\vert \bigl\langle f(x),\mathrm{e}_{k}(x)\bigr\rangle \bigr\vert ^{2} \\ &\le \frac{\gamma [L(\gamma )]^{2} \Vert \varPhi \Vert ^{2}_{L^{\infty }(0,T)}}{\lambda _{N}^{2} (L(\gamma )+\lambda _{N}(1-\gamma ) )^{2} } \sum_{k=N+1}^{\infty } \bigl\vert \bigl\langle f(x),\mathrm{e}_{k}(x)\bigr\rangle \bigr\vert ^{2}. \end{aligned}$$
(3.12)

This implies that

$$\begin{aligned} \Vert \mathcal{P}_{N}f - \mathcal{P}f \Vert _{L^{2}(\varOmega )} \le \frac{\gamma ^{0.5} L(\gamma ) \Vert \varPhi \Vert _{L^{\infty }(0,T)} }{\lambda _{N} (L(\gamma ) + \lambda _{N}(1-\gamma ) ) } \Vert f \Vert _{L^{2}(\varOmega )}. \end{aligned}$$
(3.13)

At this stage, \(\Vert \mathcal{P}_{N} - \mathcal{P} \Vert \to 0\) in the operator norm on \(L(L^{2}(\varOmega );L^{2}(\varOmega ))\) as \(N \to \infty \). Therefore, \(\mathcal{P}\) is a compact operator.

Next, the singular values for the linear self-adjoint compact operator \(\mathcal{P}\) are

$$\begin{aligned} \varXi _{k} = \biggl[ \int _{0}^{T} H_{\gamma }(\lambda _{k},s) \varPhi (s) \,ds \biggr] \bigl[A_{k}(\gamma ) \bigr]^{-1}, \end{aligned}$$
(3.14)

and \({\mathrm{e}}_{k}\) are the corresponding eigenvectors, which form an orthonormal basis of \(L^{2}(\varOmega )\). Hence, by (3.9), the inverse source problem introduced above can be formulated as the operator equation

$$\begin{aligned} \mathcal{P} f (x) = g(x), \end{aligned}$$
(3.15)

see Kirsch [27].

We give an example to illustrate the ill-posedness of our problem by choosing particular input final data. Let \(\overline{g}_{j}:= \lambda _{j}^{-1/2} \mathrm{e}_{j}\), and take as the other input final data \(g=0\). Then, by (3.7), the source term corresponding to g is \(f=0\). We obtain the following error in the \(L^{2}\) norm:

$$\begin{aligned} \Vert \overline{g}_{j}-g \Vert _{L^{2}(\varOmega )}= \bigl\Vert \lambda _{j}^{-1/2} {\mathrm{e}}_{j}(x) \bigr\Vert _{L^{2}(\varOmega )}= \lambda _{j}^{-1/2} \to 0, \quad \text{as } j \to \infty . \end{aligned}$$
(3.16)

The source term corresponding to \(\overline{g}_{j}\) is

$$\begin{aligned} \overline{f}_{j}(x) &= \sum_{k=1}^{\infty } \frac{ \langle \overline{g}_{j}(x),{\mathrm{e}}_{k}(x)\rangle {\mathrm{e}}_{k}(x)}{ [A_{k}(\gamma )]^{-1} ( \int _{0}^{T} H_{\gamma }(\lambda _{k},s)\varPhi (s) \,ds ) } \\ &= \sum_{k=1}^{\infty } \frac{\langle \lambda _{j}^{-1/2}\mathrm{e}_{j}(x),{\mathrm{e}}_{k}(x)\rangle {\mathrm{e}}_{k}(x)}{ [A_{k}(\gamma )]^{-1} ( \int _{0}^{T} H_{\gamma }(\lambda _{k},s) \varPhi (s) \,ds ) }. \end{aligned}$$
(3.17)

Using Lemma 2.8, we obtain

$$\begin{aligned} \overline{f}_{j}(x) &\geq \frac{ ( L(\gamma ) + \lambda _{j}(1-\gamma ) ) \lambda _{j}^{\frac{1}{2}} {\mathrm{e}}_{j}(x) }{ \Vert \varPhi \Vert _{L^{\infty }(0,T)} L(\gamma ) } = \frac{\frac{L(\gamma )}{\lambda _{j}} + (1-\gamma )}{ \Vert \varPhi \Vert _{L^{\infty }(0,T)} L(\gamma ) } \lambda _{j}^{\frac{3}{2}}{\mathrm{e}}_{j}(x). \end{aligned}$$
(3.18)

And the error estimation between f and \(\overline{f}_{j}\) is as follows:

$$\begin{aligned} \Vert \overline{f}_{j}-f \Vert _{L^{2}(\varOmega )}\geq \biggl\Vert \frac{\frac{L(\gamma )}{\lambda _{j}} + (1-\gamma )}{ \Vert \varPhi \Vert _{L^{\infty }(0,T)} L(\gamma ) } \lambda _{j}^{\frac{3}{2}} { \mathrm{e}}_{j}(x) \biggr\Vert _{L^{2}(\varOmega )}= \frac{\frac{L(\gamma )}{\lambda _{j}} + (1-\gamma )}{ \Vert \varPhi \Vert _{L^{\infty }(0,T)} L(\gamma ) } \lambda _{j}^{\frac{3}{2}}. \end{aligned}$$
(3.19)

Combining (3.16) and (3.19), we know that

$$\begin{aligned} \lim_{j \to +\infty } \Vert \overline{f}_{j}-f \Vert _{L^{2}( \varOmega )} \ge \lim_{j \to +\infty } \frac{\frac{L(\gamma )}{\lambda _{j}} + (1-\gamma )}{ \Vert \varPhi \Vert _{L^{\infty }(0,T)} L(\gamma ) } \lambda _{j}^{\frac{3}{2}} = +\infty . \end{aligned}$$
(3.20)

Thus our problem is ill-posed in the Hadamard sense in the \(L^{2}(\varOmega )\)-norm. □

3.2 Conditional stability of source term f

We begin this section with a theorem that establishes a conditional stability estimate.

Theorem 3.2

Let E be a positive number such that

$$\begin{aligned} \Vert f \Vert _{\mathbb{H}^{m+1}(\varOmega )} \le E \quad \textit{for }E>0. \end{aligned}$$
(3.21)

Then

$$\begin{aligned} \Vert f \Vert _{L^{2}(\varOmega )} \le \mathcal{D} (\gamma ,\varPhi _{0}, \lambda _{1},T,m ) E^{\frac{1}{m+1}} \Vert g \Vert _{L^{2}(\varOmega )}^{ \frac{m}{m+1}}, \end{aligned}$$

whereby

$$\begin{aligned} \mathcal{D}(\gamma ,\varPhi _{0},\lambda _{1},T,m) = \biggl( \frac{\gamma ^{m} (\frac{\frac{L(\gamma )}{\lambda _{1}} + (1-\gamma )}{\gamma L(\gamma )} )^{m+1} }{ (1-\gamma )^{m} \Vert \varPhi _{0} \Vert ^{m} (1 - E_{\gamma ,1} (\frac{-\gamma \lambda _{1} T^{\gamma }}{L(\gamma ) + \lambda _{1} (1-\gamma )} ) )^{m} } \biggr)^{\frac{1}{m+ 1}}. \end{aligned}$$
(3.22)

Proof

Thanks to the Hölder inequality and (3.7), we get

$$\begin{aligned} \Vert f \Vert _{L^{2}(\varOmega )}^{2} &= \sum _{k=1}^{\infty } \biggl\vert \frac{A_{k}(\gamma )\langle g(x),\mathrm{e}_{k}(x)\rangle }{ \int _{0}^{T} H_{\gamma }(\lambda _{k},s) \varPhi (s) \,ds } \biggr\vert ^{2} \\ &= \sum_{k=1}^{\infty } \frac{ [A_{k}(\gamma )]^{2} \vert \langle g(x),\mathrm{e}_{k}(x)\rangle \vert ^{\frac{2}{m+1}} \vert \langle g(x),\mathrm{e}_{k}(x)\rangle \vert ^{\frac{2m}{m+1}} }{ \vert \int _{0}^{T} H_{\gamma }(\lambda _{k},s) \varPhi (s) \,ds \vert ^{2} } \\ &\le \sum_{k=1}^{\infty } \bigl[A_{k}( \gamma ) \bigr]^{2} \biggl( \frac{ \vert \langle g(x),\mathrm{e}_{k}(x)\rangle \vert ^{2} }{ \vert \int _{0}^{T} H_{\gamma }(\lambda _{k},s) \varPhi (s) \,ds \vert ^{2m+2} } \biggr)^{\frac{1}{m+1}} \Biggl(\sum_{k=1}^{\infty } \bigl\vert \bigl\langle g(x), \mathrm{e}_{k}(x)\bigr\rangle \bigr\vert ^{2} \Biggr)^{\frac{m}{m+1}} \\ &\le \sum_{k=1}^{\infty } \biggl( \frac{ A^{2(m+1)}_{k}(\gamma ) \vert \langle f(x),\mathrm{e}_{k}(x)\rangle \vert ^{2} }{ \vert \int _{0}^{T} H_{\gamma }(\lambda _{k},s) \varPhi (s) \,ds \vert ^{2m} } \biggr)^{\frac{1}{m+1}} \Vert g \Vert _{L^{2}(\varOmega )}^{\frac{2m}{m+1}}. \end{aligned}$$
(3.23)

Using the lower bound in Lemma 2.8 together with \(\vert \varPhi (t) \vert \ge \vert \varPhi _{0} \vert \), we can easily see that

$$\begin{aligned} \begin{aligned}[b] & \biggl( \int _{0}^{T} H_{\gamma }(\lambda _{k},s) \varPhi (s)\,ds \biggr)^{2m} \\ &\quad \ge \Vert \varPhi _{0} \Vert ^{2m} \biggl(\frac{ (1-\gamma )}{\gamma } \biggr)^{2m} \biggl(1 - E_{\gamma ,1} \biggl( \frac{-\gamma \lambda _{1} T^{\gamma }}{L(\gamma ) + \lambda _{1} (1-\gamma )} \biggr) \biggr)^{2m}, \end{aligned} \end{aligned}$$
(3.24)

and this inequality leads to

$$\begin{aligned} &\sum_{k=1}^{\infty } \frac{ A^{2(m+1)}_{k}(\gamma ) \vert \langle f(x),\mathrm{e}_{k}(x)\rangle \vert ^{2} }{ \vert \int _{0}^{T} H_{\gamma }(\lambda _{k},s) \varPhi (s) \,ds \vert ^{2m} } \\ &\quad \le \sum_{k=1}^{\infty } \frac{ \gamma ^{2m} (\frac{\frac{L(\gamma )}{\lambda _{1}} + (1-\gamma )}{\gamma L(\gamma )} )^{2(m+1)} \lambda _{k}^{2(m+1)} \vert \langle f(x),\mathrm{e}_{k}(x)\rangle \vert ^{2}}{ (1-\gamma )^{2m} \Vert \varPhi _{0} \Vert ^{2m} (1 - E_{\gamma ,1} (\frac{-\gamma \lambda _{1} T^{\gamma }}{L(\gamma ) + \lambda _{1} (1-\gamma )} ) )^{2m}}. \end{aligned}$$
(3.25)

Combining (3.23) and (3.25), we get

$$\begin{aligned} \Vert f \Vert _{L^{2}(\varOmega )}^{2} \le \mathcal{D}(\gamma ,\varPhi _{0}, \lambda _{1},T,m) \Vert f \Vert ^{\frac{2}{m+1}}_{\mathbb{H}^{m+1}( \varOmega )} \Vert g \Vert _{L^{2}(\varOmega )}^{\frac{2m}{m+1}} . \end{aligned}$$
(3.26)

 □

4 A generalized Tikhonov method

Following the ideas mentioned above, we apply the generalized Tikhonov regularization method to solve problem (1.1); the regularized solution minimizes the functional

$$\begin{aligned} \mathcal{J}(f) = \Vert \mathcal{P}f-g \Vert _{L^{2}(\varOmega )}^{2} + \alpha ( \epsilon ) \Vert f \Vert _{\mathbb{H}^{m+1}(\varOmega )}^{2}, \quad m \in \mathbb {R} ^{+}. \end{aligned}$$
(4.1)

Let \(f^{\alpha (\epsilon )}\) denote a minimizer of (4.1); then \(f^{\alpha (\epsilon )}\) satisfies

$$\begin{aligned} \mathcal{P}^{*}\mathcal{P} f^{\alpha (\epsilon )} + \alpha ( \epsilon ) (-\Delta )^{m+1}f^{\alpha (\epsilon )} = \mathcal{P}^{*}g(x). \end{aligned}$$
(4.2)

Since the operator \(\mathcal{P}\) is compact and self-adjoint (see [27]), we obtain

$$\begin{aligned} f^{\alpha (\epsilon )}(x) = \sum_{k=1}^{\infty } \frac{ [A_{k}(\gamma ) ]^{-1} (\int _{0}^{T}H_{\gamma }(\lambda _{k},s) \varPhi (s) \,ds ) }{\alpha (\epsilon ) \lambda _{k}^{m+1} + \vert [A_{k}(\gamma ) ]^{-1} ( \int _{0}^{T}H_{\gamma }(\lambda _{k},s) \varPhi (s) \,ds ) \vert ^{2}} \bigl\langle g(x),\mathrm{e}_{k}(x)\bigr\rangle \mathrm{e}_{k}(x). \end{aligned}$$
(4.3)

If the measured data \((\varPhi _{\epsilon }(t), g_{\epsilon }(x))\) of \((\varPhi (t), g(x))\) with a noise level of ϵ satisfies

$$ \Vert g - g_{\epsilon } \Vert _{L^{2}(\varOmega )} < \epsilon , \quad\quad \Vert \varPhi - \varPhi _{\epsilon } \Vert _{L^{\infty }(0,T)} < \epsilon , $$
(4.4)

then we present the following regularized solution:

$$\begin{aligned} f^{\alpha (\epsilon )}_{\epsilon }(x) = \sum_{k=1}^{\infty } \frac{ [A_{k}(\gamma ) ]^{-1} (\int _{0}^{T}H_{\gamma }(\lambda _{k},s) \varPhi _{\epsilon }(s) \,ds ) }{\alpha (\epsilon ) \lambda _{k}^{m+1} + \vert [A_{k}(\gamma ) ]^{-1} ( \int _{0}^{T}H_{\gamma }(\lambda _{k},s) \varPhi _{\epsilon }(s) \,ds ) \vert ^{2}} \bigl\langle g_{\epsilon }(x),\mathrm{e}_{k}(x) \bigr\rangle \mathrm{e}_{k}(x), \end{aligned}$$
(4.5)

and denote

$$\begin{aligned} P_{\gamma } (\lambda _{k},s,\varPhi ) = \bigl[A_{k}(\gamma ) \bigr]^{-1} \biggl( \int _{0}^{T}H_{\gamma }(\lambda _{k},s) \varPhi (s) \,ds \biggr). \end{aligned}$$
(4.6)

Therefore, from (4.3), (4.5), and (4.6), one has

$$\begin{aligned} f^{\alpha (\epsilon )}(x) = \sum_{k=1}^{\infty } \frac{P_{\gamma } (\lambda _{k},s,\varPhi )}{\alpha (\epsilon ) \lambda _{k}^{m+1} + \vert P_{\gamma } (\lambda _{k},s,\varPhi ) \vert ^{2}} \bigl\langle g(x),{\mathrm{e}}_{k}(x)\bigr\rangle { \mathrm{e}}_{k}(x) \end{aligned}$$
(4.7)

and

$$\begin{aligned} f_{\epsilon }^{\alpha (\epsilon )}(x) = \sum _{k=1}^{\infty } \frac{P_{\gamma } (\lambda _{k},s,\varPhi _{\epsilon } )}{\alpha (\epsilon ) \lambda _{k}^{m+1} + \vert P_{\gamma } (\lambda _{k},s,\varPhi _{\epsilon } ) \vert ^{2}} \bigl\langle g_{\epsilon }(x),{\mathrm{e}}_{k}(x)\bigr\rangle { \mathrm{e}}_{k}(x). \end{aligned}$$
(4.8)
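In coefficient form, (4.8) is simply a spectral filter: the naive inversion (3.8) divides each coefficient \(\langle g_{\epsilon },\mathrm{e}_{k}\rangle \) by \(P_{\gamma }(\lambda _{k},s,\varPhi _{\epsilon })\), whereas the regularized solution multiplies it by \(P_{\gamma }/(\alpha (\epsilon )\lambda _{k}^{m+1}+P_{\gamma }^{2})\), which stays bounded when \(P_{\gamma }\) is small. A short Python sketch of this filtering step, with purely illustrative input values, is:

```python
import numpy as np

def tikhonov_filter(g_coeffs, P, eigenvalues, alpha, m):
    """Generalized Tikhonov reconstruction (4.8) in coefficient form.

    g_coeffs    : noisy Fourier coefficients <g_eps, e_k>
    P           : precomputed values P_gamma(lambda_k, ., Phi_eps) from (4.6)
    eigenvalues : lambda_k
    alpha, m    : regularization parameter alpha(eps) and smoothness index m
    """
    g_coeffs, P, lam = (np.asarray(a, dtype=float) for a in (g_coeffs, P, eigenvalues))
    return P * g_coeffs / (alpha * lam**(m + 1) + P**2)

# illustration: the unregularized inversion divides by P and blows up for small P,
# while the filtered coefficients remain bounded
lam = np.arange(1, 11).astype(float)**2
P = 1.0 / lam**2                          # illustrative decay of P_gamma in lambda_k
g_coeffs = np.exp(-lam) + 1e-3            # "data" coefficients plus a small perturbation
print(g_coeffs / P)                                        # naive inversion, grows rapidly
print(tikhonov_filter(g_coeffs, P, lam, alpha=1e-4, m=1))  # filtered, stays bounded
```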

4.1 Convergence estimates of the generalized Tikhonov regularization method under a priori parameter choice rules

As the main objective of this section, we prove an error estimate for \(\Vert f-f_{\epsilon }^{\alpha (\epsilon )} \Vert _{L^{2}(\varOmega )}\) and establish the convergence rate under an a priori choice rule for the regularization parameter.

Theorem 4.1

Let Φ, \(\varPhi _{\epsilon }\) satisfy Lemma 2.9, and assume the a priori bound (3.21). Then the following estimates hold:

  1. (a)

    If \(0 < m < 3\) and we choose \(\alpha (\epsilon ) = (\frac{\epsilon }{E} )^{\frac{4}{m+1}}\), then from (4.18) and (4.29) we obtain

    $$\begin{aligned} \bigl\Vert f - f_{\epsilon }^{\alpha (\epsilon )} \bigr\Vert _{L^{2}(\varOmega )} \textit{ is of order } \epsilon ^{\frac{4}{m+5}}. \end{aligned}$$
    (4.9)
  2. (b)

    If \(m \geq 3\) and we choose \(\alpha (\epsilon ) = \frac{\epsilon }{E}\), then from (4.18) and (4.29) we obtain

    $$\begin{aligned} \bigl\Vert f - f_{\epsilon }^{\alpha (\epsilon )} \bigr\Vert _{L^{2}(\varOmega )} \textit{ is of order } \epsilon ^{\frac{1}{2}}. \end{aligned}$$
    (4.10)

Proof

From (4.3), (4.5) and using the triangle inequality, we get

$$\begin{aligned} \bigl\Vert f_{\epsilon }^{\alpha (\epsilon )} - f \bigr\Vert _{L^{2}(\varOmega )} &\le \underbrace{ \bigl\Vert f_{\epsilon }^{\alpha (\epsilon )} - f^{\alpha (\epsilon )} \bigr\Vert _{L^{2}(\varOmega )}}_{ \mathcal{S}_{1} + \mathcal{S}_{2} + \mathcal{S}_{3}} + \underbrace{ \bigl\Vert f^{\alpha (\epsilon )} - f \bigr\Vert _{L^{2}(\varOmega )}}_{ \mathcal{I}_{2}} . \end{aligned}$$
(4.11)

We prove this theorem through the following two lemmas.

Lemma 4.1

Assume that (4.4) holds. Then the following estimate holds:

$$\begin{aligned} & \bigl\Vert f^{\alpha (\epsilon )}_{\epsilon } - f^{\alpha (\epsilon )} \bigr\Vert _{L^{2}(\varOmega )} \\ &\quad \le \frac{ \epsilon \Vert f \Vert _{L^{2}(\varOmega )} }{ \vert \varPhi _{0} \vert } + \frac{ (\gamma L(\gamma ) )^{2}}{ (L(\gamma ) + \lambda _{1}(1-\gamma ) )^{4}} \frac{ \epsilon \Vert f \Vert _{L^{2}(\varOmega )} }{ \vert \varPhi _{0} \vert } + \frac{\epsilon }{2 (\alpha (\epsilon ) \lambda _{1}^{m+1} )^{1/2} }. \end{aligned}$$
(4.12)

Proof

From (4.7) and (4.8), we have

$$\begin{aligned} & f_{\epsilon }^{\alpha (\epsilon )} - f^{\alpha (\epsilon )} \\ &\quad = \sum_{k=1}^{\infty } \biggl( \frac{P_{\gamma } (\lambda _{k},s,\varPhi )}{\alpha (\epsilon ) \lambda _{k}^{m+1} + \vert P_{\gamma } (\lambda _{k},s,\varPhi ) \vert ^{2}} - \frac{P_{\gamma } (\lambda _{k},s,\varPhi _{\epsilon } )}{\alpha (\epsilon ) \lambda _{k}^{m+1} + \vert P_{\gamma } (\lambda _{k},s,\varPhi _{\epsilon } ) ) \vert ^{2}} \biggr) \bigl\langle g(x), \mathrm{e}_{k}(x)\bigr\rangle \mathrm{e}_{k}(x) \\ &\quad \quad{} + \sum_{k=1}^{\infty } \frac{P_{\gamma } (\lambda _{k},s,\varPhi _{\epsilon } )}{\alpha (\epsilon ) \lambda _{k}^{m+1} + \vert P_{\gamma } (\lambda _{k},s,\varPhi _{\epsilon } ) \vert ^{2}} \bigl\langle g(x) - g_{\epsilon }(x),\mathrm{e}_{k}(x) \bigr\rangle \mathrm{e}_{k}(x) \\ &\quad \leq \mathcal{S}_{1} + \mathcal{S}_{2} + \mathcal{S}_{3}, \end{aligned}$$
(4.13)

in which \(\mathcal{S}_{1}\), \(\mathcal{S}_{2}\), and \(\mathcal{S}_{3}\) are as follows:

$$\begin{aligned} \begin{aligned} &\mathcal{S}_{1}=\sum _{k=1}^{\infty } \frac{ \alpha (\epsilon ) \lambda _{k}^{m+1} P_{\gamma } (\lambda _{k},s,\varPhi -\varPhi _{\epsilon } ) \langle g(x),\mathrm{e}_{k}(x)\rangle \mathrm{e}_{k}(x)}{ ( \alpha (\epsilon ) \lambda _{k}^{m+1} + \vert P_{\gamma } (\lambda _{k},s,\varPhi ) \vert ^{2} ) ( \alpha (\epsilon ) \lambda _{k}^{m+1} + \vert P_{\gamma } (\lambda _{k},s,\varPhi _{\epsilon } ) \vert ^{2} ) }, \\ &\mathcal{S}_{2} = \sum_{k=1}^{\infty } \frac{ P_{\gamma } (\lambda _{k},s,\varPhi ) P_{\gamma } (\lambda _{k},s,\varPhi ) P_{\gamma } (\lambda _{k},s,\varPhi -\varPhi _{\epsilon } ) \langle g(x),\mathrm{e}_{k}(x)\rangle \mathrm{e}_{k}(x) }{ ( \alpha (\epsilon ) \lambda _{k}^{m+1} + \vert P_{\gamma } (\lambda _{k},s,\varPhi ) \vert ^{2} ) ( \alpha (\epsilon ) \lambda _{k}^{m+1}+ \vert P_{\gamma } (\lambda _{k},s,\varPhi _{\epsilon } ) \vert ^{2} )}, \\ &\mathcal{S}_{3} = \sum_{k=1}^{\infty } \frac{P_{\gamma } (\lambda _{k},s,\varPhi _{\epsilon } )}{\alpha (\epsilon ) \lambda _{k}^{m+1} + \vert P_{\gamma } (\lambda _{k},s,\varPhi _{\epsilon } ) \vert ^{2}} \bigl\langle g(x) - g_{\epsilon }(x),\mathrm{e}_{k}(x) \bigr\rangle \mathrm{e}_{k}(x). \end{aligned} \end{aligned}$$
(4.14)

Step 1: Estimating \(\Vert \mathcal{S}_{1} \Vert _{L^{2}(\varOmega )}\), using the inequality \(a^{2}+b^{2} \ge 2ab\), \(\forall a,b \ge 0\), we obtain

$$\begin{aligned} \Vert \mathcal{S}_{1} \Vert ^{2}_{L^{2}(\varOmega )} &\le \sum_{k=1}^{ \infty } \biggl( \frac{ \alpha (\epsilon ) \lambda _{k}^{m+1} \vert P_{\gamma } (\lambda _{k},s,\varPhi -\varPhi _{\epsilon } ) \vert }{ 4 \alpha (\epsilon ) \lambda _{k}^{m+1} \vert P_{\gamma } (\lambda _{k},s,\varPhi ) \vert \vert P_{\gamma } (\lambda _{k},s,\varPhi _{\epsilon } ) \vert } \biggr)^{2} \bigl\vert \bigl\langle g(x),\mathrm{e}_{k}(x) \bigr\rangle \bigr\vert ^{2} \\ &\le \sum_{k=1}^{\infty } \biggl( \frac{ \vert \int _{0}^{T}H_{\gamma }(\lambda _{k},s) ( \varPhi (s) - \varPhi _{\epsilon }(s) ) \,ds \vert }{ 4 (\int _{0}^{T}H_{\gamma }(\lambda _{k},s) \varPhi _{\epsilon }(s) \,ds )} \biggr)^{2} \frac{ \vert \langle g(x),\mathrm{e}_{k}(x)\rangle \vert ^{2}}{ \vert P_{\gamma } (\lambda _{k},s,\varPhi ) \vert ^{2} } \\ &\le \frac{ \Vert \varPhi - \varPhi _{\epsilon } \Vert ^{2}_{L^{\infty }(0,T)} }{ \vert \varPhi _{0} \vert ^{2} } \sum_{k=1}^{\infty } \bigl\vert \bigl\langle f(x),\mathrm{e}_{k}(x)\bigr\rangle \bigr\vert ^{2} \\ &= \frac{ \Vert \varPhi - \varPhi _{\epsilon } \Vert ^{2}_{L^{\infty }(0,T)} }{ \vert \varPhi _{0} \vert ^{2} } \Vert f \Vert _{L^{2}(\varOmega )}^{2}. \end{aligned}$$
(4.15)

Step 2: Estimate \(\Vert \mathcal{S}_{2} \Vert _{L^{2}(\varOmega )}\) as follows:

$$\begin{aligned} \Vert \mathcal{S}_{2} \Vert ^{2}_{L^{2}(\varOmega )} &\le \sum_{k=1}^{ \infty } \frac{ [A_{k}(\gamma ) ]^{-2} \vert P_{\gamma } (\lambda _{k},s,\varPhi - \varPhi _{\epsilon } ) \vert ^{2} }{ ( \int _{0}^{T}H_{\gamma }(\lambda _{k},s) \varPhi _{\epsilon }(s) \,ds )^{2} } \frac{ \vert \langle g(x),\mathrm{e}_{k}(x)\rangle \vert ^{2}}{ \vert P_{\gamma } (\lambda _{k},s,\varPhi ) \vert ^{2} } \\ &\le \frac{ \Vert \varPhi - \varPhi _{\epsilon } \Vert ^{2}_{L^{\infty }(0,T)} }{ \vert \varPhi _{0} \vert ^{2} } \sum_{k=1}^{\infty } \frac{ (\gamma L(\gamma ) )^{4}}{ (L(\gamma ) + \lambda _{k}(1-\gamma ) )^{8}} \bigl\vert \bigl\langle f(x),\mathrm{e}_{k}(x) \bigr\rangle \bigr\vert ^{2} \\ &\le \frac{ \Vert \varPhi - \varPhi _{\epsilon } \Vert ^{2}_{L^{\infty }(0,T)} }{ \vert \varPhi _{0} \vert ^{2} } \frac{ (\gamma L(\gamma ) )^{4}}{ (L(\gamma ) + \lambda _{1}(1-\gamma ) )^{8}} \sum_{k=1}^{\infty } \bigl\vert \bigl\langle f(x),\mathrm{e}_{k}(x)\bigr\rangle \bigr\vert ^{2} \\ &= \frac{ (\gamma L(\gamma ) )^{4}}{ (L(\gamma ) + \lambda _{1}(1-\gamma ) )^{8}} \frac{ \Vert \varPhi - \varPhi _{\epsilon } \Vert ^{2}_{L^{\infty }(0,T)} }{ \vert \varPhi _{0} \vert ^{2} } \Vert f \Vert ^{2}_{L^{2}(\varOmega )}. \end{aligned}$$
(4.16)

Step 3: Finally, \(\Vert \mathcal{S}_{3} \Vert _{L^{2}(\varOmega )}\) can be bounded by

$$\begin{aligned} \Vert \mathcal{S}_{3} \Vert ^{2}_{L^{2}(\varOmega )} &\le \sum_{k=1}^{ \infty } \biggl\vert \frac{ P_{\gamma } (\lambda _{k},s,\varPhi _{\epsilon } )}{\alpha (\epsilon ) \lambda _{k}^{m+1} + \vert P_{\gamma } (\lambda _{k},s,\varPhi _{\epsilon } ) \vert ^{2}} \biggr\vert ^{2} \bigl\vert \bigl\langle g(x) - g_{\epsilon }(x),\mathrm{e}_{k}(x)\bigr\rangle \bigr\vert ^{2} \\ &\le \frac{1}{4\alpha (\epsilon ) \lambda _{1}^{m+1}} \sum_{k=1}^{ \infty } \bigl\vert \bigl\langle g(x) - g_{\epsilon }(x) , \mathrm{e}_{k}(x) \bigr\rangle \bigr\vert ^{2} \\ &= \frac{ \Vert g - g_{\epsilon } \Vert ^{2}_{L^{2}(\varOmega )}}{4\alpha (\epsilon ) \lambda _{1}^{m+1}} \le \frac{\epsilon ^{2}}{4 \alpha (\epsilon ) \lambda _{1}^{m+1} }. \end{aligned}$$
(4.17)

Combining (4.15) to (4.17), we obtain

$$\begin{aligned} & \bigl\Vert f_{\epsilon }^{\alpha (\epsilon )} - f^{\alpha (\epsilon )} \bigr\Vert _{L^{2}(\varOmega )} \\ &\quad \le \Vert \mathcal{S}_{1} \Vert _{L^{2}(\varOmega )}+ \Vert \mathcal{S}_{2} \Vert _{L^{2}(\varOmega )}+ \Vert \mathcal{S}_{3} \Vert _{L^{2}(\varOmega )} \\ &\quad \le \frac{ \epsilon \Vert f \Vert _{L^{2}(\varOmega )} }{ \vert \varPhi _{0} \vert } + \frac{ (\gamma L(\gamma ) )^{2}}{ (L(\gamma ) + \lambda _{1}(1-\gamma ) )^{4}} \frac{ \epsilon \Vert f \Vert _{L^{2}(\varOmega )} }{ \vert \varPhi _{0} \vert } + \frac{\epsilon }{2 (\alpha (\epsilon ) \lambda _{1}^{m+1} )^{1/2} }. \end{aligned}$$
(4.18)

The proof is completed. □

Next, we estimate the term \(\mathcal{I}_{2}\) as follows.

Lemma 4.2

Let \(f \in \mathbb{H}^{m+1}(\varOmega )\). Subtracting (3.8) from (4.7), we see that

$$\begin{aligned} \bigl\Vert f^{\alpha (\epsilon )} - f \bigr\Vert _{L^{2}(\varOmega )} \le \textstyle\begin{cases} {[\alpha (\epsilon )]^{\frac{1}{2}} }\lambda _{1}^{\frac{3-m}{2}} Q_{ \gamma }(\lambda _{1},T,E), &m \geq 3, \\ [\alpha (\epsilon ) ]^{\frac{m+1}{m+5}}Q_{\gamma }(\lambda _{1},T,E), &0< m < 3, \end{cases}\displaystyle \end{aligned}$$
(4.19)

in which \(Q_{\gamma } (\lambda _{1},T,E )\) is defined in (4.30).

Proof

By using Parseval’s equality, (3.7), and (4.3), we obtain

$$\begin{aligned} \mathcal{I}_{2}^{2}&:= \bigl\Vert f^{\alpha (\epsilon )} - f \bigr\Vert _{L^{2}( \varOmega )}^{2} \le \sum_{k=1}^{+\infty } \frac{ [\alpha (\epsilon ) \lambda _{k}^{m+1} ]^{2} \vert \langle g(x),\mathrm{e}_{k}(x)\rangle \vert ^{2} }{ \vert P_{\gamma } (\lambda _{k},s,\varPhi ) \vert ^{2} [\alpha (\epsilon ) \lambda _{k}^{m+1} + \vert P_{\gamma } (\lambda _{k},s,\varPhi ) \vert ^{2} ]^{2} } \\ &\le \sup_{k \in \mathbb{N}} \bigl\vert G(k) \bigr\vert ^{2} \sum_{k=1}^{+ \infty } \frac{ \lambda _{k}^{2(m+1)} \vert \langle g(x),\mathrm{e}_{k}(x)\rangle \vert ^{2} }{ \vert P_{\gamma } (\lambda _{k},s,\varPhi ) \vert ^{2} } \\ &\le \sup_{k \in \mathbb{N}} \bigl\vert G(k) \bigr\vert ^{2} \Vert f \Vert ^{2}_{\mathbb{H}^{m+1}( \varOmega )}, \end{aligned}$$
(4.20)

where

$$\begin{aligned} G(k) = \frac{ \alpha (\epsilon ) }{ \alpha (\epsilon ) \lambda _{k}^{m+1} + \vert P_{\gamma } (\lambda _{k},s,\varPhi ) \vert ^{2} } . \end{aligned}$$
(4.21)

The function G can be bounded as follows:

$$\begin{aligned} G(k) &\le \frac{ \alpha (\epsilon ) }{ 2 (\alpha (\epsilon ) \lambda _{k}^{m+1} )^{\frac{1}{2}} \vert P_{\gamma } (\lambda _{k},s,\varPhi ) \vert } \leq \frac{[\alpha (\epsilon )]^{\frac{1}{2}}}{2\lambda _{k}^{\frac{m+1}{2}}P_{\gamma } (\lambda _{k},s,\varPhi )} \\ &\leq \lambda _{k}^{\frac{3-m}{2}} \frac{[\alpha (\epsilon )]^{\frac{1}{2}} ( \frac{L(\gamma )}{\lambda _{1}} + (1-\gamma ) )^{2}}{2\gamma L(\gamma ) \vert \varPhi _{0} \vert } \\ &\quad{}\times \biggl( \frac{1-\gamma }{\gamma } \biggr)^{-1} \biggl(1 - E_{\gamma ,1} \biggl( \frac{-\gamma \lambda _{1} T^{\gamma }}{L(\gamma ) + \lambda _{1} (1-\gamma )} \biggr) \biggr)^{-1}. \end{aligned}$$
(4.22)

From (4.22), we divide into two cases.

Case 1: \(m \ge 3\). We have

$$\begin{aligned} \lambda _{k}^{\frac{3-m}{2}} = \frac{1}{\lambda _{k}^{\frac{m-3}{2}}} \le \frac{1}{\lambda _{1}^{\frac{m-3}{2}}}= \lambda _{1}^{ \frac{3-m}{2}}. \end{aligned}$$
(4.23)

Combining (4.20), (4.23), we obtain

$$\begin{aligned} \bigl\Vert f - f^{\alpha (\epsilon )} \bigr\Vert _{L^{2}(\varOmega )} &\le \frac{ {[\alpha (\epsilon )]^{\frac{1}{2}} } \lambda _{1}^{\frac{3-m}{2}} ( \frac{L(\gamma )}{\lambda _{1}} + (1-\gamma ) )^{2}}{2\gamma L(\gamma ) \vert \varPhi _{0} \vert } \biggl( \frac{1-\gamma }{\gamma } \biggr)^{-1} \\ &\quad{} \times \biggl(1 - E_{\gamma ,1} \biggl( \frac{-\gamma \lambda _{1} T^{\gamma }}{L(\gamma ) + \lambda _{1} (1-\gamma )} \biggr) \biggr)^{-1} \Vert f \Vert _{\mathbb{H}^{m+1}(\varOmega )}. \end{aligned}$$
(4.24)

Case 2: \(0 < m < 3\). We set \(\mathbb{N} = \mathbb{V}_{1} \cup \mathbb{V}_{2}\) and choose any \(\ell \in (0,3 )\), where

$$\begin{aligned} \mathbb{V}_{1} = \bigl\{ k \in \mathbb{N}, \lambda _{k}^{\frac{3-m}{2}} \le \bigl[\alpha (\epsilon )\bigr]^{-\ell } \bigr\} , \quad\quad \mathbb{V}_{2} = \bigl\{ k \in \mathbb{N}, \lambda _{k}^{\frac{3-m}{2}} > \bigl[\alpha (\epsilon )\bigr]^{- \ell } \bigr\} . \end{aligned}$$
(4.25)

In this case, we distinguish two further subcases as follows:

  1. (a)

    If \(k \in \mathbb{V}_{1}\), one has

    $$\begin{aligned} \bigl\Vert f - f^{\alpha (\epsilon )} \bigr\Vert _{L^{2}(\varOmega )} & \le \frac{{[\alpha (\epsilon )]^{\frac{1}{2}-\ell }} ( \frac{L(\gamma )}{\lambda _{1}} + (1-\gamma ) )^{2}}{2\gamma L(\gamma ) \vert \varPhi _{0} \vert } \biggl( \frac{1-\gamma }{\gamma } \biggr)^{-1} \\ &\quad{} \times \biggl(1 - E_{\gamma ,1} \biggl( \frac{-\gamma \lambda _{1} T^{\gamma }}{L(\gamma ) + \lambda _{1} (1-\gamma )} \biggr) \biggr)^{-1} \Vert f \Vert _{\mathbb{H}^{m+1}(\varOmega )}. \end{aligned}$$
    (4.26)
  2. (b)

    If \(k \in \mathbb{V}_{2}\), using the inequality \(a+b \geq 2\sqrt{ab}\), \(\forall a,b > 0\) gives

    $$\begin{aligned} \bigl\Vert f - f^{\alpha (\epsilon )} \bigr\Vert _{L^{2}(\varOmega )} &\le \frac{{[\alpha (\epsilon )]^{\frac{2\ell (m+1)}{3-m}}} ( \frac{L(\gamma )}{\lambda _{1}} + (1-\gamma ) )^{2}}{2\gamma L(\gamma ) \vert \varPhi _{0} \vert } \biggl( \frac{1-\gamma }{\gamma } \biggr)^{-1} \\ &\quad{} \times \biggl(1 - E_{\gamma ,1} \biggl( \frac{-\gamma \lambda _{1} T^{\gamma }}{L(\gamma ) + \lambda _{1} (1-\gamma )} \biggr) \biggr)^{-1} \Vert f \Vert _{\mathbb{H}^{m+1}(\varOmega )}. \end{aligned}$$
    (4.27)

Combining (4.26) and (4.27), we have thus proved

$$\begin{aligned} \bigl\Vert f - f^{\alpha (\epsilon )} \bigr\Vert _{L^{2}(\varOmega )} &\le \bigl({ \bigl[ \alpha (\epsilon )\bigr]^{\frac{1}{2}-\ell }} + {\bigl[\alpha (\epsilon ) \bigr]^{ \frac{2\ell (m+1)}{3-m}}} \bigr)Q_{\gamma }(\lambda _{1},T,E) . \end{aligned}$$
(4.28)

Choosing \(\ell =\frac{3-m}{2(m+5)}\) and using \(\Vert f \Vert _{\mathbb{H}^{m+1}(\varOmega )} \le E\), this implies that

$$\begin{aligned} \bigl\Vert f - f^{\alpha (\epsilon )} \bigr\Vert _{L^{2}(\varOmega )} \le \bigl[ \alpha (\epsilon ) \bigr]^{\frac{m+1}{m+5}}Q_{\gamma }(\lambda _{1},T,E), \end{aligned}$$
(4.29)

where

$$\begin{aligned} &Q_{\gamma } (\lambda _{1},T,E ) \\ &\quad = \frac{ ( \frac{L(\gamma )}{\lambda _{1}} + (1-\gamma ) )^{2}}{2\gamma L(\gamma ) \vert \varPhi _{0} \vert } \biggl( \frac{1-\gamma }{\gamma } \biggr)^{-1} \biggl(1 - E_{\gamma ,1} \biggl( \frac{-\gamma \lambda _{1} T^{\gamma }}{L(\gamma ) + \lambda _{1} (1-\gamma )} \biggr) \biggr)^{-1}E. \end{aligned}$$
(4.30)

 □

Combining (4.18) and (4.29), the proof is completed by showing that

  1. (a)

    If \(0 < m < 3\) and choosing \(\alpha (\epsilon ) = (\frac{\epsilon }{E} )^{\frac{4}{m+1}}\) from (4.18) and (4.29), we get

    $$\begin{aligned} \bigl\Vert f_{\epsilon }^{\alpha (\epsilon )} - f \bigr\Vert _{L^{2}(\varOmega )} \text{ is of order } \epsilon ^{\frac{4}{m+5}} . \end{aligned}$$
    (4.31)
  2. (b)

    If \(m \geq 3\) and choosing \(\alpha (\epsilon ) = \frac{\epsilon }{E}\), from (4.18) and (4.29), we get

    $$\begin{aligned}& \bigl\Vert f_{\epsilon }^{\alpha (\epsilon )} - f \bigr\Vert _{L^{2}(\varOmega )} \text{ is of order } \epsilon ^{\frac{1}{2}} . \end{aligned}$$
    (4.32)

 □

5 Simulation example

In this section, we present an example to illustrate the theory. We choose \(T = 1\), \(m=1\), and consider the three cases \(\gamma = 0.75\), \(\gamma =0.85\), and \(\gamma =0.95\). The computations in this paper are supported by the Matlab codes given by Podlubny [28]. Here, we compute the generalized Mittag-Leffler function with \(P = 10^{-10}\). We consider the problem as follows:

$$\begin{aligned} {}^{\mathrm{ABC}}_{0}D_{t}^{\gamma }u(x,t) - \Delta u(x,t) = \varPhi (t) f(x), \quad (x,t) \in \varOmega \times (0,T), \end{aligned}$$
(5.1)

whereby \({}^{\mathrm{ABC}}_{0} D_{t}^{\gamma }u(x,t)\) is the Atangana–Baleanu fractional derivative.

Let \(\Delta u = \frac{\partial ^{2}}{\partial x^{2}} u \) on the domain \(\varOmega = (0,\pi )\) with the Dirichlet boundary condition such that \(u(0,t) = u(\pi ,t) = 0\), \(t \in (0, 1)\). Then we have the eigenvalues and corresponding eigenvectors: \(\lambda _{k} = k^{2}\), \(k = 1, 2,\ldots \) , and \(\mathrm{e}_{k}(x) = \sqrt{\frac{2}{\pi }}\sin (kx)\), respectively.

In addition, problem (5.1) satisfies the following condition:

$$\begin{aligned} u(x,1) = g(x), \quad x \in (0,\pi ). \end{aligned}$$
(5.2)

Then we have the following solution:

$$\begin{aligned} u(x,t)= t^{3}\sin (2x). \end{aligned}$$
(5.3)

We have

$$\begin{aligned} g(x) = \sqrt{\frac{2}{\pi }} \sin (2x), \quad\quad \varPhi (t) = \biggl(\frac{\varGamma (4)t^{3-\gamma }}{\varGamma (4-\gamma )} + 4 t^{3} \biggr). \end{aligned}$$
(5.4)

From (5.3) and (5.4), a simple computation shows that

$$\begin{aligned} f(x) = \sin (2x). \end{aligned}$$
(5.5)

The algorithm consists of the following steps.

Step 1: Considering the domain \((x,t) \in (0,\pi ) \times (0,1)\), we use finite differences to discretize the time and spatial variables; the spatial grid is

$$\begin{aligned} x_{k} = k \Delta x,\quad 0\le k \le N, \Delta x = \frac{\pi }{N}. \end{aligned}$$

Step 2: The noisy observation data \((g_{\epsilon },\varPhi _{\epsilon })\) are generated from the exact data \((g,\varPhi )\) as follows:

$$\begin{aligned} \varPhi _{\epsilon } = \varPhi + \frac{1}{\pi }\epsilon \bigl(2 \operatorname{rand} (\cdot )-1\bigr) , \quad\quad g_{\epsilon } = g + \frac{1}{\pi }\epsilon \bigl(2 \operatorname{rand}(\cdot )-1\bigr). \end{aligned}$$
(5.6)
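A minimal Python sketch of this perturbation step, assuming the data are sampled on the grid of Step 1 and using NumPy's uniform random generator for \(\operatorname{rand}(\cdot )\), is:

```python
import numpy as np
from math import gamma as Gamma

rng = np.random.default_rng(0)
N, eps, gamma_ = 40, 0.01, 0.75
x = np.arange(1, N + 1) * np.pi / N                  # spatial grid x_k = k*pi/N from Step 1
t = np.linspace(0.0, 1.0, 101)                       # time grid on (0, 1)

g = np.sqrt(2.0 / np.pi) * np.sin(2.0 * x)           # exact final data, cf. (5.4)
Phi = Gamma(4) * t**(3.0 - gamma_) / Gamma(4.0 - gamma_) + 4.0 * t**3

# uniformly distributed perturbations of size eps/pi, as in (5.6)
g_eps = g + (eps / np.pi) * (2.0 * rng.random(g.shape) - 1.0)
Phi_eps = Phi + (eps / np.pi) * (2.0 * rng.random(Phi.shape) - 1.0)
```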

Step 3: The relative error estimation is given by

$$\begin{aligned} \mathit{Error} = \biggl( \frac{\sum_{k=1}^{\mathbf{N}} \Vert f^{\alpha (\epsilon )}_{\epsilon }(x_{k}) - f(x_{k}) \Vert _{L^{2}(0,\pi )}^{2}}{ \sum_{k=1}^{\mathbf{N}} \Vert f(x_{k}) \Vert _{L^{2}(0,\pi )}^{2} } \biggr)^{ \frac{1}{2}}. \end{aligned}$$
(5.7)

In addition, we choose the regularization parameter \(\alpha (\epsilon ) = \frac{\epsilon }{E}\) according to the a priori parameter choice rule, where E is the a priori bound computed as \(\Vert f \Vert _{\mathbb{H}^{2}(0,\pi )}\). We use the fact that (see [29])

$$\begin{aligned} \int _{0}^{1} x^{\sigma -1} (1 - x)^{\gamma -1} E_{\alpha ,\gamma }\bigl(z(1-x)^{\alpha }\bigr) \,dx = \varGamma (\sigma ) E_{\alpha ,\gamma +\sigma }(z). \end{aligned}$$
(5.8)

From (5.8), by replacing \(\alpha = \gamma \) and \(z = -\frac{\gamma k^{2}}{L(\gamma ) + k^{2}(1-\gamma )}\), we can find

$$\begin{aligned} \int _{0}^{1} x^{\sigma -1} (1-x)^{\gamma -1} E_{\gamma , \gamma } \biggl(\frac{-\gamma k^{2}}{L(\gamma ) + k^{2}(1-\gamma )} (1-x)^{\gamma } \biggr)\,dx = \varGamma (\sigma ) E_{\gamma , \gamma +\sigma } \biggl( \frac{-\gamma k^{2}}{L(\gamma ) + k^{2}(1-\gamma )} \biggr). \end{aligned}$$
(5.9)

From (4.5), we have the following regularized solution with a truncation number N:

$$\begin{aligned} &f^{\alpha (\epsilon )}_{\epsilon }(x) \\ &\quad = \sum_{k=1}^{N} \frac{ [A_{k}(\gamma ) ]^{-1} (\int _{0}^{1}H_{\gamma }(k^{2},s) \varPhi _{\epsilon }(s) \,ds ) }{ (\frac{\epsilon }{E} ) k^{2m+2} + \vert [A_{k}(\gamma ) ]^{-1} ( \int _{0}^{1}H_{\gamma }(k^{2},s) \varPhi _{\epsilon }(s) \,ds ) \vert ^{2}} \bigl\langle g_{\epsilon }(x),\mathrm{e}_{k}(x) \bigr\rangle \mathrm{e}_{k}(x), \end{aligned}$$
(5.10)

in which \(A_{k}(\gamma )\) and \(H_{\gamma }(k^{2},s)\) are defined in Lemma 2.8. From (5.9) and (5.10), we can calculate the integral \(\int _{0}^{1} H_{\gamma }(k^{2},s) \varPhi _{\epsilon }(s) \,ds \) as follows:

$$\begin{aligned} & \int _{0}^{1}H_{\gamma }\bigl(k^{2},s \bigr) \varPhi _{\epsilon }(s) \,ds \\ &\quad = \varGamma (4) E_{\gamma ,4} \biggl(- \frac{\gamma k^{2}}{L(\gamma )+ k^{2}(1-\gamma )} \biggr) + 4 \varGamma (4) E_{ \gamma ,\gamma +4} \biggl( - \frac{\gamma k^{2}}{ L(\gamma ) + k^{2}(1-\gamma )} \biggr) \\ &\quad\quad{} + \frac{1}{\pi }\epsilon \bigl(2\operatorname{rand}(\cdot )-1 \bigr) \frac{L(\gamma ) + k^{2}(1-\gamma ) }{\gamma k^{2}} \biggl(1 - E_{ \gamma ,1} \biggl(\frac{-\gamma k^{2}}{L(\gamma ) + k^{2} (1-\gamma )} \biggr) \biggr). \end{aligned}$$
(5.11)
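For completeness, the steps above can be combined into a short script. The Python sketch below is only illustrative: it uses a truncated-series Mittag-Leffler function instead of Podlubny's Matlab routine [28], generates the exact final data directly from the forward formula (3.5) in coefficient form (so that data and source are consistent), adds a perturbation of size \(\epsilon /\pi \) to the Fourier coefficients of g (a simplification of (5.6)), applies the filter (5.10) with the closed-form integral (5.11), and evaluates the relative error (5.7). It is not intended to reproduce the exact numbers in Table 1.

```python
import numpy as np
from math import gamma as Gamma

def ml(z, alpha, beta, n_terms=120):
    """Truncated Mittag-Leffler series (2.3)."""
    return sum(z**j / Gamma(alpha * j + beta) for j in range(n_terms))

# ----- setup of the example in Sect. 5: T = 1, m = 1, f(x) = sin(2x) -----
gamma_, m, Ntr, eps = 0.75, 1, 40, 0.01
L = 1.0 - gamma_ + gamma_ / Gamma(gamma_)              # L(gamma)
rng = np.random.default_rng(0)

x = np.linspace(0.0, np.pi, 201)[1:-1]                 # interior grid on (0, pi)
f_exact = np.sin(2.0 * x)
e = lambda k: np.sqrt(2.0 / np.pi) * np.sin(k * x)     # eigenfunctions e_k(x)

ks = np.arange(1, Ntr + 1)
lam = ks.astype(float)**2                              # lambda_k = k^2
f_coeff = np.zeros(Ntr)
f_coeff[1] = np.sqrt(np.pi / 2.0)                      # <f, e_k>: only k = 2 is nonzero

# closed-form value of int_0^1 H_gamma(k^2, s) Phi(s) ds, cf. (5.11), with Phi as in (5.4)
z = -gamma_ * lam / (L + lam * (1.0 - gamma_))
A = (L + lam * (1.0 - gamma_))**2 / (gamma_ * L)       # A_k(gamma) of Lemma 2.8
int_exact = np.array([Gamma(4) * ml(zi, gamma_, 4.0)
                      + 4.0 * Gamma(4) * ml(zi, gamma_, gamma_ + 4.0) for zi in z])
P_exact = int_exact / A                                # P_gamma(lambda_k, ., Phi), cf. (4.6)

# exact final data from the forward formula (3.5) in coefficient form, then perturbed
g_coeff = P_exact * f_coeff
g_eps = g_coeff + (eps / np.pi) * (2.0 * rng.random(Ntr) - 1.0)
phi_pert = (eps / np.pi) * (2.0 * rng.random() - 1.0)  # constant perturbation of Phi
int_eps = int_exact + phi_pert * np.array([(1.0 - ml(zi, gamma_, 1.0)) / (-zi) for zi in z])
P_eps = int_eps / A                                    # P_gamma(lambda_k, ., Phi_eps)

# a priori rule alpha(eps) = eps / E with E = ||f||_{H^{m+1}(0,pi)}, cf. (2.2)
E = np.sqrt(np.sum(lam**(2 * (m + 1)) * f_coeff**2))
alpha = eps / E

# generalized Tikhonov reconstruction (5.10) and relative error (5.7)
coeff = P_eps * g_eps / (alpha * lam**(m + 1) + P_eps**2)
f_rec = sum(coeff[k - 1] * e(k) for k in ks)
print("relative L2 error:", np.linalg.norm(f_rec - f_exact) / np.linalg.norm(f_exact))
```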

In these calculations, we choose \(N = 40\). Figures 1, 2, and 3 show the 2D graphs of the source function with the exact data and its approximation for the a priori parameter choice rule with \(\gamma =0.75\), \(\gamma =0.85\), and \(\gamma =0.95\), respectively, together with the corresponding error estimates for \(\epsilon = 0.1\), \(\epsilon = 0.01\), and \(\epsilon = 0.001\). Table 1 shows the error estimates between the source function computed from the exact data and from the measured data for the a priori parameter choice rule in the three cases of γ. From this table, we can conclude that the approximation result is acceptable, which means that the proposed method is effective.

Figure 1

Graph of the regularized, exact solutions and the corresponding errors with \(\gamma =0.75\)

Figure 2

Graph of the regularized, exact solutions and the corresponding errors with \(\gamma =0.85\)

Figure 3

Graph of the regularized, exact solutions and the corresponding errors with \(\gamma =0.95\)

Table 1 The error between the regularized and exact solutions at \(\gamma\in\{0.75, 0.85, 0.95\}\) and \(\epsilon\in\{0.1, 0.01, 0.001\}\)

6 Conclusion

We used the generalized Tikhonov method to regularize the inverse problem of identifying an unknown source term for fractional diffusion equations with the Atangana–Baleanu fractional derivative. By giving an example, we showed that this problem is ill-posed in the sense of Hadamard. In addition, we proved a convergence estimate between the sought solution and the regularized solution under an a priori parameter choice rule. Finally, we presented an example to simulate our proposed regularization. In future work, we will extend this research direction for this type of derivative, for example by studying the regularity of solutions, continuous dependence on the fractional order, and comparisons with existing derivatives.