1 Introduction

In recent decades, fractional operators have played increasingly important roles in science and engineering [1], e.g., mechanics, biochemistry, electrical engineering, and medicine; see [2–8]. These fractional-order models are more adequate than the integer-order models because fractional-order derivatives and integrals make it possible to describe the memory and hereditary properties of various substances [9]. This is the most significant difference between the fractional-order models and the integer-order models [10–14].

The direct problems, i.e., the initial value problem and the initial boundary value problem for fractional differential equations, have been studied extensively in the past few years [15–22]. However, in some practical problems, the boundary data on the whole boundary cannot be obtained. We only know noisy data on a part of the boundary or at some interior points of the domain of interest, which leads to inverse problems such as space-fractional inverse diffusion problems.

The space-fractional diffusion equation arises by replacing the standard spatial partial derivative in the diffusion equation with a space-fractional partial derivative. It plays important roles in modeling anomalous diffusion and subdiffusion systems, describing fractional random walks, and unifying diffusion and wave propagation phenomena [23].

In this paper, we consider the following backward problem for the space-fractional diffusion equation:

$$ \textstyle\begin{cases} u_{t}(x,t)= {}_{x}D^{\alpha}_{\theta}u(x,t), &x\in\mathbb{R}, t\in (0,T), \\ u(x,T)=g(x), &x\in\mathbb{R}, \\ u(x,t)|_{x\rightarrow\pm\infty}=0, &t\in(0,T), \end{cases} $$
(1.1)

where the space-fractional derivative \({}_{x}D^{\alpha}_{\theta}\) is the Riesz–Feller fractional derivative of order α (\(0<\alpha\leq2\)) and skewness θ (\(|\theta|\leq\min\{\alpha,2-\alpha\}\), \(\theta\neq\pm1\)) defined by

$$\begin{aligned}& {}_{x}D^{\alpha}_{\theta}f(x)=\frac{\Gamma(1+\alpha)}{\pi} \biggl\{ \sin \frac{(\alpha+\theta)\pi}{2} \int^{\infty}_{0}\frac{f(x+\tau)-f(x)}{\tau^{1+\alpha}}\,d\tau \\& \hphantom{{}_{x}D^{\alpha}_{\theta}f(x)={}} {}+\sin\frac{(\alpha-\theta)\pi}{2} \int^{\infty}_{0}\frac{f(x-\tau )-f(x)}{\tau^{1+\alpha}}\,d\tau \biggr\} ,\quad 0< \alpha< 2, \\& {}_{x}D^{2}_{0}f(x)=\frac{d^{2}f(x)}{dx^{2}}, \quad \alpha=2, \end{aligned}$$
(1.2)

where \(\Gamma(\cdot)\) is the Gamma function. The Fourier transform of \({}_{x}D^{\alpha}_{\theta}\) is given in [24] by

$$ \mathcal{F} \bigl\{ {}_{x}D^{\alpha}_{\theta}f(x); \xi \bigr\} =-\psi^{\theta}_{\alpha }(\xi)\hat{f}(\xi), $$
(1.3)

where

$$ \psi^{\theta}_{\alpha}(\xi)=|\xi|^{\alpha}e^{i(\operatorname{sign}(\xi ))\theta\pi/2}. $$
(1.4)

Our backward problem is to find an approximation to the temperature \(u(x,t)\) for \(0\leq t< T\) from the measured data \(g^{\delta}(x)\), a noise-contaminated version of the exact data \(g(x)=u(x,T)\), which satisfies

$$ \bigl\Vert g^{\delta}(x)-g(x) \bigr\Vert \leq\delta, $$
(1.5)

where \(\|\cdot\|\) denotes the \(L^{2}\)-norm and the constant \(\delta >0\) represents the noise level. This problem is ill-posed, i.e., the solution does not depend continuously on the measured data. Therefore, effective regularization methods for this problem are needed.

Several previous studies have addressed this problem. The authors of [25] proposed a Fourier method and a convolution method and gave convergence estimates. In [26], Zhang and Wei developed an optimal modified method to solve this problem. In [27], the authors applied a simplified Tikhonov regularization method. Cheng et al. [28] considered a new iterative regularization method. In 2015, Shi et al. [29] discussed a new a posteriori parameter choice strategy for the convolution regularization method. In contrast, the main goal of the current work is to provide a modified kernel method for solving the space-fractional backward diffusion problem (1.1).

The idea of constructing regularization methods by modifying the ‘kernel’ comes from [30], in which the authors found that, as long as the kernel function satisfies a certain property, new regularization methods can be constructed. Earlier, the same idea appeared in [31], in which the problem of high-order numerical differentiation was successfully solved. Since then, the modified ‘kernel’ idea has been used for solving various types of ill-posed problems [32–34]. In order to examine whether a regularization method is optimal, we will give the optimal error bound for the problem under a source condition.

The outline of the paper is as follows. In Sect. 2, we analyze the ill-posedness of the space-fractional backward diffusion problem (1.1). In Sect. 3, we give some preliminary results, apply them to problem (1.1), and derive the optimal error bound. In Sect. 4, we describe our regularization method and prove convergence estimates under a priori and a posteriori parameter choice rules. Finally, a short summary of the conclusions of this paper is given in Sect. 5.

2 Ill-posedness of the problem

Let \(\hat{g}(\xi)\) denote the Fourier transform of the function \(g(x)\), which is defined by

$$ \hat{g}(\xi)=\frac{1}{\sqrt{2\pi}} \int^{\infty}_{-\infty }g(x)e^{-i\xi x}\,dx. $$

Taking the Fourier transform of problem (1.1) with respect to x, it is easy to see that

$$ \textstyle\begin{cases} \hat{u}_{t}(\xi,t)=-\psi^{\theta}_{\alpha}(\xi)\hat{u}(\xi,t), &\xi\in\mathbb{R}, t\in(0,T), \\ \hat{u}(\xi,T)=\hat{g}(\xi), &\xi\in\mathbb{R}. \end{cases} $$
(2.1)

The solution of this problem is given by

$$ \hat{u}(\xi,t)=e^{\psi^{\theta}_{\alpha}(\xi)(T-t)}\hat{g}(\xi). $$
(2.2)

Taking the inverse Fourier transform, we get

$$ u(x,t)=\frac{1}{\sqrt{2\pi}} \int^{\infty}_{-\infty}e^{i\xi x}e^{\psi ^{\theta}_{\alpha}(\xi)(T-t)} \hat{g}(\xi)\,d\xi. $$
(2.3)

Note that \(\psi^{\theta}_{\alpha}(\xi)\) has the positive real part \(|\xi |^{\alpha}\cos{\frac{\theta\pi}{2}}\), so a small error in the high-frequency components is amplified by the factor \(e^{|\xi|^{\alpha}\cos {\frac{\theta\pi}{2}}(T-t)}\) for \(0\leq t< T\) as \(|\xi|\rightarrow \infty\). Thus the space-fractional backward diffusion problem is ill-posed. To solve it, we will construct a new regularization method based on the modified ‘kernel’ idea.
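This exponential amplification is easy to observe numerically. The following sketch evaluates the modulus of the factor multiplying \(\hat{g}\) in (2.2); all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# Modulus of the factor e^{psi(xi)(T - t)} from (2.2), which equals
# e^{|xi|^alpha cos(theta*pi/2)(T - t)}; illustrative parameter values.
alpha, theta, T, t = 1.5, 0.2, 1.0, 0.0

def amplification(xi):
    return np.exp(np.abs(xi) ** alpha * np.cos(theta * np.pi / 2) * (T - t))

# High-frequency components of the data error are magnified without bound:
for xi in (1.0, 5.0, 10.0):
    print(f"xi = {xi:5.1f}: amplification = {amplification(xi):.3e}")
```

Even at moderate frequencies the factor is astronomically large, which is exactly why unregularized inversion of (2.2) is useless for noisy data.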

3 Preliminary results and optimal error bound for problem (1.1)

3.1 Preliminary

Let \(X,Y\) be infinite-dimensional Hilbert spaces, and let \(F: X\rightarrow Y\) be a bounded, injective linear operator with non-closed range \(R(F)\).

Consider the following inverse problem [35]:

$$ Fx=y. $$
(3.1)

We assume that \(y^{\delta}\in Y\) is available noisy data and satisfies \(\|y^{\delta}-y\|\leq\delta\). Any operator \(R: Y\rightarrow X\) can be considered as a special method for solving (3.1), and the approximate solution of (3.1) is given by \(Ry^{\delta}\).

Let \(M\subset X\) be a bounded set. Let us introduce the worst case error \(\Delta(\delta, R)\) for identifying x from \(y^{\delta}\) as [35]

$$ \Delta(\delta, R):=\sup \bigl\{ \bigl\Vert Ry^{\delta}-x \bigr\Vert |x\in M, y^{\delta}\in Y, \bigl\Vert Fx-y^{\delta} \bigr\Vert \leq\delta \bigr\} . $$
(3.2)

The best possible error bound (or optimal error bound) is defined as the infimum over all mappings \(R: Y\rightarrow X\):

$$ \omega(\delta,E):=\inf_{R}\Delta(\delta, R). $$
(3.3)

Now let us review some optimality results if the set \(M=M_{\varphi,E}\) is a set of elements which satisfy some source condition, i.e.,

$$ M_{\varphi,E}= \bigl\{ x\in X|x= \bigl[\varphi \bigl(F^{*}F \bigr) \bigr]^{\frac{1}{2}}v, \|v\|\leq E \bigr\} , $$
(3.4)

where the operator function \(\varphi(F^{*}F)\) is well defined via spectral representation [35]

$$ \varphi \bigl(F^{*}F \bigr)= \int^{a}_{0}\varphi(\lambda)\, d E_{\lambda}, $$
(3.5)

where \(F^{*}F=\int^{a}_{0}\lambda \, d E_{\lambda}\) is the spectral decomposition of \(F^{*}F\), \(\{E_{\lambda}\}\) denotes the spectral family of the operator \(F^{*}F\), and a is a constant such that \(\|F^{*}F\|\leq a\) with \(a=\infty\) if \(F^{*}F\) is unbounded. In the case when \(F: L^{2}(\mathbb{R})\rightarrow L^{2}(\mathbb{R})\) is a multiplication operator, \(Fx(s)=r(s)x(s)\), the operator function \(\varphi(F^{*}F)\) has the form

$$ \varphi \bigl(F^{*}F \bigr)x(s)=\varphi \bigl( \bigl\vert r(s) \bigr\vert ^{2} \bigr)x(s). $$
(3.6)

Then a method \(R_{0}\) is called [35]

  1. (I)

    optimal on the set \(M_{\varphi,E}\) if \(\Delta(\delta,R_{0})=\omega(\delta ,E)\) holds;

  2. (II)

    order optimal on the set \(M_{\varphi,E}\) if \(\Delta(\delta,R_{0})\leq l \omega(\delta,E)\) with \(l\geq1\) holds.

In order to derive an explicit optimal error bound for the worst case error \(\Delta(\delta,R)\) defined in (3.2), we assume that the function φ in (3.6) satisfies the following assumption.

Assumption 3.1

([35])

The function \(\varphi(\lambda): (0,a]\rightarrow(0,\infty)\) in (3.6) is continuous and has the following properties:

  1. (I)

    \(\lim_{\lambda\rightarrow0}\varphi(\lambda)=0\);

  2. (II)

    φ is strictly monotonically increasing on \((0,a]\);

  3. (III)

    \(\rho(\lambda)=\lambda\varphi^{-1}(\lambda):(0,\varphi (a)]\rightarrow(0,a\varphi(a)]\) is convex.

Under Assumption 3.1, the next theorem gives us a general formula for the optimal error bound.

Theorem 3.2

([35])

Let \(M_{\varphi,E}\) be given by (3.4), let Assumption 3.1 be satisfied, and let \(\frac{\delta^{2}}{E^{2}}\in\sigma(F^{*}F\varphi(F^{*}F))\), where \(\sigma(\cdot)\) denotes the spectrum of an operator. Then

$$ \omega(\delta,E)=E\sqrt{\rho^{-1} \biggl( \frac{\delta^{2}}{E^{2}} \biggr)}. $$
(3.7)

3.2 Optimal error bound for problem (1.1)

In this section, we consider the model problem (1.1). We actually have the measured data function \(g^{\delta}(x)\in L^{2}(\mathbb{R})\) and the exact data function \(g(x)\in L^{2}(\mathbb{R})\), which satisfy (1.5). To obtain a convergence rate between the regularized solution and the exact solution, we need the a priori bound

$$ \bigl\Vert u(x,0) \bigr\Vert \leq E, $$
(3.8)

where E is a constant. Now, let us formulate problem (1.1) as an operator equation

$$ Pu=g , $$
(3.9)

with a linear operator \(P\in\mathcal{L}(L^{2}(\mathbb{R}),L^{2}(\mathbb {R}))\). From Sect. 2 it is clear that equation (3.9) is equivalent to the operator equation in frequency space

$$ F\hat{u}=\hat{g}. $$
(3.10)

From (2.2), we obtain

$$ F:=e^{\psi^{\theta}_{\alpha}(\xi)(t-T)}, $$
(3.11)

where \(F:L^{2}(\mathbb{R})\rightarrow L^{2}(\mathbb{R})\) is a linear, bounded multiplication operator. By elementary calculations, we can easily obtain

$$ F^{*}=\overline{e^{\psi^{\theta}_{\alpha}(\xi)(t-T)}} $$
(3.12)

and

$$ FF^{*}=F^{*}F=e^{2|\xi|^{\alpha}\cos\frac{\theta\pi}{2}(t-T)}. $$
(3.13)

Let \(Z=e^{-t\psi^{\theta}_{\alpha}(\xi)}\). Then \(Z^{*}=\overline {e^{-t\psi^{\theta}_{\alpha}(\xi)}}\) and \(ZZ^{*}=Z^{*}Z=e^{-2t|\xi|^{\alpha}\cos\frac{\theta\pi}{2}}\). According to \(\|ZU\|=\|(Z^{*}Z)^{\frac{1}{2}}U\|\) and (3.8), the general source set (3.4) is equivalent to the following set:

$$\begin{aligned} M_{\varphi,E}&= \bigl\{ \hat{u}(\xi,t)\in L^{2}(\mathbb{R})|\hat {u}(\xi,t)=e^{-t\psi^{\theta}_{\alpha}(\xi)}\hat{u}(\xi,0) \\ &\quad := \bigl[\varphi \bigl(F^{*}F \bigr) \bigr]^{\frac{1}{2}}\hat{u}( \xi,0), \bigl\Vert \hat{u}(\xi,0) \bigr\Vert \leq E \bigr\} . \end{aligned}$$
(3.14)

Hence, we can obtain

$$ ZZ^{*}=\varphi \bigl(F^{*}F \bigr)=e^{-2t|\xi|^{\alpha}\cos\frac{\theta\pi }{2}}= \bigl(e^{2|\xi|^{\alpha}\cos\frac{\theta\pi}{2}(t-T)} \bigr)^{\frac{t}{T-t}}. $$
(3.15)

The function \(\varphi(\lambda)\) in (3.14) is as follows:

$$ \varphi(\lambda)=\lambda^{\frac{t}{T-t}},\quad 0< t< T. $$
(3.16)

It is easy to see that \(\varphi(\lambda):(0,\infty)\rightarrow(0,\infty )\) is continuous.

We now verify that the function \(\varphi(\lambda)\) has the properties required in Assumption 3.1.

  1. (I)

    It is obvious that \(\lim_{\lambda\rightarrow0}\varphi(\lambda)=0\) holds;

  2. (II)

    Because of \(\varphi'(\lambda)=\frac{t}{T-t}\lambda^{\frac {2t-T}{T-t}}>0\), \(0< t< T\), we know \(\varphi(\lambda)\) is strictly monotonically increasing in \((0,\infty)\);

  3. (III)

    From (3.16), we can obtain \(\varphi^{-1}(\lambda)=\lambda ^{\frac{T-t}{t}}\) and \(\rho(\lambda)=\lambda\varphi^{-1}(\lambda)=\lambda^{\frac{T}{t}}\). By elementary calculations, we obtain \(\rho''(\lambda)=\frac{T^{2}-Tt}{t^{2}}\lambda^{\frac {T-2t}{t}}>0\). Consequently, the function \(\rho(\lambda)\) is strictly convex, and

    $$ \rho^{-1}(\lambda)=\lambda^{\frac{t}{T}}. $$
    (3.17)
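The relations (3.16) and (3.17) are easy to check numerically. The following sketch (with illustrative values of \(t\) and \(T\), not from the paper) confirms that \(\varphi^{-1}\) and \(\rho^{-1}\) invert \(\varphi\) and \(\rho\):

```python
# Sanity check of (3.16)-(3.17); t and T below are illustrative values.
t, T = 0.3, 1.0

phi = lambda lam: lam ** (t / (T - t))        # phi(lam) = lam^{t/(T-t)}, see (3.16)
phi_inv = lambda lam: lam ** ((T - t) / t)
rho = lambda lam: lam * phi_inv(lam)          # rho(lam) = lam^{T/t}
rho_inv = lambda lam: lam ** (t / T)          # see (3.17)

lam = 0.42
print(phi_inv(phi(lam)), rho_inv(rho(lam)))   # both recover lam = 0.42
```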

Theorem 3.3

Assume that (1.5) and the a priori condition (3.8) hold. Then, for problem (1.1), there holds the optimal error bound

$$ \omega(\delta,E)=\delta^{\frac{t}{T}}E^{1-\frac{t}{T}}. $$
(3.18)

Proof

From (3.13) and (3.15), we know

$$ \sigma \bigl(F^{*}F\varphi \bigl(F^{*}F \bigr) \bigr)=(0,1). $$

Therefore, for small δ,

$$ \frac{\delta^{2}}{E^{2}}\in\sigma \bigl(F^{*}F\varphi \bigl(F^{*}F \bigr) \bigr). $$

Combining (3.7) and (3.17), it follows that

$$ \omega(\delta,E)=E\sqrt{\rho^{-1} \biggl(\frac{\delta^{2}}{E^{2}} \biggr)}= E\sqrt{ \biggl(\frac{\delta^{2}}{E^{2}} \biggr)^{\frac{t}{T}}}=\delta^{\frac {t}{T}}E^{1-\frac{t}{T}}. $$

 □
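The closed form (3.18) can be checked against the general formula (3.7); the values of \(t\), \(T\), δ, and E below are illustrative, not from the paper.

```python
# Verify E*sqrt(rho_inv(delta^2/E^2)) = delta^{t/T} E^{1-t/T}, cf. (3.7), (3.18).
t, T, delta, E = 0.3, 1.0, 1e-3, 10.0

rho_inv = lambda lam: lam ** (t / T)           # from (3.17)
omega_general = E * (rho_inv(delta**2 / E**2)) ** 0.5
omega_closed = delta ** (t / T) * E ** (1 - t / T)
print(omega_general, omega_closed)             # the two expressions agree
```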

4 A modified kernel method and convergence estimates

In this paper, we will present a modified kernel method for constructing a stable approximate solution of problem (1.1). The regularized solution in the frequency domain is given by

$$ \hat{u}^{\delta}_{\beta}(\xi,t)=\frac{e^{\psi^{\theta}_{\alpha}(\xi )(T-t)}}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}} \hat{g}^{\delta}(\xi),\quad \frac{1}{2}\leq\gamma< \frac{2}{T}, $$
(4.1)

where \(\beta>0\) is the regularization parameter. The regularized solution (4.1) can be interpreted as replacing the unbounded kernel \(e^{\psi^{\theta}_{\alpha}(\xi)(T-t)}\) with the modified kernel

$$ \frac{e^{\psi^{\theta}_{\alpha}(\xi)(T-t)}}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}. $$

The modified kernel has the following two properties:

  1. (I)

    If the parameter β is small, then for small \(|\xi|\), the kernel \(\frac{e^{\psi^{\theta}_{\alpha}(\xi)(T-t)}}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}\) is close to the exact kernel \(e^{\psi^{\theta}_{\alpha}(\xi)(T-t)}\);

  2. (II)

    If β is fixed, the kernel \(\frac{e^{\psi^{\theta}_{\alpha }(\xi)(T-t)}}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}\) is bounded.

Property (I) says that, for an appropriately chosen parameter β, the regularized kernel preserves the information of the exact kernel in the components with small \(|\xi|\). This preserved information makes it possible for the regularized solution to approximate the exact one. Property (II) describes the degree of continuous dependence: when the regularized kernel is bounded, the regularized solution depends continuously on the data. Together, properties (I) and (II) guarantee that the regularized solution (4.1) depends continuously on the data and approximates the exact solution.
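Both properties are easy to observe numerically. The sketch below evaluates the moduli of the exact and modified kernels on a frequency grid; all parameter values are illustrative and satisfy \(\frac{1}{2}\leq\gamma<\frac{2}{T}\).

```python
import numpy as np

# Moduli of the exact kernel e^{psi(T-t)} and the modified kernel in (4.1);
# parameter values are illustrative, not from the paper.
alpha, theta, T, t, gamma, beta = 1.5, 0.2, 1.0, 0.5, 1.0, 1e-6
c = np.cos(theta * np.pi / 2)

def exact_kernel(xi):
    return np.exp(np.abs(xi) ** alpha * c * (T - t))

def modified_kernel(xi):
    return exact_kernel(xi) / (1 + beta * np.exp(2 * gamma * T * np.abs(xi) ** alpha * c))

# (I): for small |xi| the two kernels nearly coincide.
print(modified_kernel(1.0) / exact_kernel(1.0))    # close to 1
# (II): the modified kernel stays bounded where the exact one blows up.
print(exact_kernel(20.0), modified_kernel(20.0))
```

By Lemma 4.2 below, the modified kernel is uniformly bounded by \(\beta^{-\frac{T-t}{2\gamma T}}\), which the test grid confirms.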

In the following, to establish the convergence estimates between the regularization solution and the exact solution, we give two auxiliary lemmas which can be easily proved.

Lemma 4.1

Let \(\nu>0\) and \(b>a>0\) be constants. Then the following inequality holds for all \(s>0\):

$$ \frac{s^{a}}{1+\nu s^{b}}\leq\frac{b-a}{b} \biggl( \frac{a}{b-a} \biggr)^{\frac{a}{b}}\nu^{-\frac{a}{b}}. $$
(4.2)

Proof

Let \(f(s)=\frac{s^{a}}{1+\nu s^{b}}\). Setting \(f'(s)=0\) yields the unique critical point \(s^{\ast }= (\frac{a}{\nu(b-a)} )^{\frac{1}{b}}\), at which the function \(f(s)\) attains its maximum. Then we obtain \(f(s)\leq f(s^{\ast})=\frac{b-a}{b} (\frac{a}{b-a} )^{\frac {a}{b}}\nu^{-\frac{a}{b}}\). □
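Lemma 4.1 can be checked numerically; in the sketch below (with illustrative constants \(a\), \(b\), ν), the maximizer \(s^{\ast}\) from the proof attains exactly the stated bound:

```python
# Numerical check of Lemma 4.1 with illustrative constants.
a, b, nu = 1.2, 3.0, 0.05

f = lambda s: s**a / (1 + nu * s**b)
bound = (b - a) / b * (a / (b - a)) ** (a / b) * nu ** (-a / b)
s_star = (a / (nu * (b - a))) ** (1 / b)       # critical point from the proof

print(f(s_star), bound)                        # the maximum equals the bound
```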

Lemma 4.2

([36])

Let \(0< m\leq n\) and \(\nu>0\). Then the following inequality holds:

$$ \sup_{\eta\geq0}\frac{e^{\eta m}}{1+\nu e^{\eta n}}\leq \nu^{-\frac{m}{n}}. $$
(4.3)
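A quick numerical check of Lemma 4.2 (with illustrative values of \(m\), \(n\), ν):

```python
import numpy as np

# Check sup_{eta >= 0} e^{eta*m}/(1 + nu*e^{eta*n}) <= nu^{-m/n} for 0 < m <= n.
m, n, nu = 1.0, 2.5, 0.01

g = lambda eta: np.exp(eta * m) / (1 + nu * np.exp(eta * n))
bound = nu ** (-m / n)

etas = np.linspace(0.0, 20.0, 2001)
print(g(etas).max(), bound)                    # the supremum stays below the bound
```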

4.1 A priori selection rule

In this subsection, under a priori choice of regularization parameter, we give the convergence estimate between the exact solution and its regularized solution for problem (1.1).

Theorem 4.3

Suppose that \(u^{\delta}_{\beta}(x,t)\) is the regularized solution with noisy data \(g^{\delta}(x)\) and that \(u(x,t)\) is the exact solution with the exact data \(g(x)\). Let assumptions (1.5) and (3.8) be satisfied. If we choose

$$ \beta= \biggl(\frac{\delta}{E} \biggr)^{2 \gamma}, $$
(4.4)

then for every \(t\in(0,T)\), there holds the error estimate

$$ \bigl\Vert u^{\delta}_{\beta}(\cdot,t)-u(\cdot,t) \bigr\Vert \leq K\delta^{\frac {t}{T}}E^{1-\frac{t}{T}}, $$
(4.5)

where \(K=K_{1}+K_{2}\), \(K_{1}=\frac{(2\gamma-1)T+t}{2\gamma T} (\frac {T-t}{(2\gamma-1)T+t} )^{\frac{T-t}{2\gamma T}}\), \(K_{2}=\frac {t}{2\gamma T} (\frac{2\gamma T-t}{t} )^{\frac{2\gamma T-t}{2\gamma T}}\).

Proof

Due to the Parseval identity and the triangle inequality, we know

$$\begin{aligned} \begin{aligned}[b] \bigl\Vert u^{\delta}_{\beta}(\cdot,t)-u(\cdot,t) \bigr\Vert &= \bigl\Vert \hat{u}^{\delta }_{\beta}(\cdot,t) -\hat{u}(\cdot,t) \bigr\Vert \\ &= \bigl\Vert \hat{u}^{\delta}_{\beta}(\cdot,t)- \hat{u}_{\beta}(\cdot ,t)+\hat{u}_{\beta}(\cdot,t) -\hat{u}( \cdot,t) \bigr\Vert \\ &\leq \bigl\Vert \hat{u}^{\delta}_{\beta}(\cdot,t)- \hat{u}_{\beta}(\cdot ,t) \bigr\Vert + \bigl\Vert \hat{u}_{\beta}(\cdot,t) -\hat{u}(\cdot,t) \bigr\Vert \\ &=I_{1}+I_{2}, \end{aligned} \end{aligned}$$
(4.6)

where

$$\begin{aligned}& I_{1}= \bigl\Vert \hat{u}^{\delta}_{\beta}(\cdot,t)- \hat{u}_{\beta}(\cdot ,t) \bigr\Vert , \\& I_{2}= \bigl\Vert \hat{u}_{\beta}(\cdot,t)-\hat{u}(\cdot,t) \bigr\Vert . \end{aligned}$$

We first estimate the term \(I_{1}\) on the right-hand side of (4.6):

$$\begin{aligned} I_{1}&= \biggl\Vert \frac{e^{\psi^{\theta}_{\alpha}(\xi)(T-t)}}{1+\beta |e^{T\psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}\hat{g}^{\delta}(\xi )- \frac{e^{\psi^{\theta}_{\alpha}(\xi)(T-t)}}{1+\beta|e^{T\psi^{\theta }_{\alpha}(\xi)}|^{2\gamma}}\hat{g}(\xi) \biggr\Vert \\ &= \biggl\Vert \frac{e^{\psi^{\theta}_{\alpha}(\xi)(T-t)}}{1+\beta|e^{T\psi ^{\theta}_{\alpha}(\xi)}|^{2\gamma}} \bigl(\hat{g}^{\delta}(\xi )-\hat{g}( \xi) \bigr) \biggr\Vert \\ &\leq\delta\sup_{\xi\in\mathbb{R}}\frac{e^{(T-t)|\xi|^{\alpha}\cos \frac{\theta\pi}{2}}}{1+\beta e^{2\gamma T |\xi|^{\alpha}\cos\frac {\theta\pi}{2}}}. \end{aligned}$$
(4.7)

Let

$$ A(\xi):=\frac{e^{(T-t)|\xi|^{\alpha}\cos\frac{\theta\pi}{2}}}{1+\beta e^{2\gamma T |\xi|^{\alpha}\cos\frac{\theta\pi}{2}}}. $$
(4.8)

Here, we set \(s:=e^{|\xi|^{\alpha}\cos\frac{\theta\pi}{2}}\), and then \(A(\xi)\) can be written as

$$ A(s):=\frac{s^{(T-t)}}{1+\beta s^{2\gamma T}}. $$
(4.9)

From Lemma 4.1, we get

$$ I_{1}\leq K_{1}\delta \beta^{-\frac{T-t}{2\gamma T}}, $$
(4.10)

where \(K_{1}=\frac{(2\gamma-1)T+t}{2\gamma T} (\frac{T-t}{(2\gamma -1)T+t} )^{\frac{T-t}{2\gamma T}}\).

Next, we estimate the term \(I_{2}\) on the right-hand side of (4.6):

$$\begin{aligned} I_{2}&= \biggl\Vert \frac{e^{\psi^{\theta}_{\alpha}(\xi)(T-t)}}{1+\beta |e^{T\psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}\hat{g}(\xi)- e^{\psi^{\theta}_{\alpha}(\xi)(T-t)} \hat{g}(\xi) \biggr\Vert \\ &= \biggl\Vert \biggl(\frac{1}{1+\beta|e^{T\psi^{\theta}_{\alpha}(\xi )}|^{2\gamma}}-1 \biggr) e^{\psi^{\theta}_{\alpha}(\xi)(T-t)}\hat{g}( \xi) \biggr\Vert \\ &= \biggl\Vert \frac{\beta|e^{T\psi^{\theta}_{\alpha}(\xi)}|^{2\gamma }}{1+\beta|e^{T\psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}} e^{-t \psi^{\theta}_{\alpha}(\xi)}e^{T \psi^{\theta}_{\alpha}(\xi )}\hat{g}(\xi) \biggr\Vert \\ &\leq E\sup_{\xi\in\mathbb{R}} \biggl\vert \frac{\beta|e^{T\psi^{\theta }_{\alpha}(\xi)}|^{2\gamma}}{1+\beta|e^{T\psi^{\theta}_{\alpha}(\xi )}|^{2\gamma}} e^{-t \psi^{\theta}_{\alpha}(\xi)} \biggr\vert \\ &=\beta E \sup_{\xi\in\mathbb{R}}\frac{e^{(2\gamma T-t) |\xi|^{\alpha }\cos\frac{\theta\pi}{2}}}{1+\beta e^{2\gamma T |\xi|^{\alpha}\cos\frac {\theta\pi}{2}}}. \end{aligned}$$
(4.11)

Let

$$ B(\xi):=\frac{e^{(2\gamma T-t) |\xi|^{\alpha}\cos\frac{\theta\pi }{2}}}{1+\beta e^{2\gamma T |\xi|^{\alpha}\cos\frac{\theta\pi}{2}}}. $$
(4.12)

Here, we also set \(s:=e^{|\xi|^{\alpha}\cos\frac{\theta\pi}{2}}\), and then \(B(\xi)\) can be written as

$$ B(s):=\frac{s^{(2\gamma T-t)}}{1+\beta s^{2\gamma T}}. $$
(4.13)

From Lemma 4.1, we obtain

$$ I_{2}\leq K_{2}\beta^{\frac{t}{2\gamma T}}E, $$
(4.14)

where \(K_{2}=\frac{t}{2\gamma T} (\frac{2\gamma T-t}{t} )^{\frac {2\gamma T-t}{2\gamma T}}\).

Combining (4.4), (4.6), (4.10), and (4.14), we obtain the estimate (4.5):

$$ \bigl\Vert u^{\delta}_{\beta}(\cdot,t)-u(\cdot,t) \bigr\Vert \leq K\delta^{\frac {t}{T}}E^{1-\frac{t}{T}}, $$

where \(K=K_{1}+K_{2}\). □
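The role of the a priori choice (4.4) is to balance the two bounds (4.10) and (4.14). This balance can be verified numerically; the parameter values below are illustrative, not from the paper.

```python
# With beta = (delta/E)^{2*gamma}, the bounds (4.10) and (4.14) both reduce
# to multiples of delta^{t/T} E^{1-t/T}; parameter values are illustrative.
t, T, gamma, delta, E = 0.4, 1.0, 1.0, 1e-4, 5.0
beta = (delta / E) ** (2 * gamma)

K1 = (((2*gamma - 1)*T + t) / (2*gamma*T)
      * ((T - t) / ((2*gamma - 1)*T + t)) ** ((T - t) / (2*gamma*T)))
K2 = t / (2*gamma*T) * ((2*gamma*T - t) / t) ** ((2*gamma*T - t) / (2*gamma*T))

I1_bound = K1 * delta * beta ** (-(T - t) / (2*gamma*T))   # from (4.10)
I2_bound = K2 * beta ** (t / (2*gamma*T)) * E              # from (4.14)
total = (K1 + K2) * delta ** (t / T) * E ** (1 - t / T)    # right side of (4.5)
print(I1_bound + I2_bound, total)                          # the two agree
```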

Remark 4.4

From Theorem 3.3 and Theorem 4.3, we know that the modified kernel method for problem (1.1) is order optimal.

Remark 4.5

The error estimate in Theorem 4.3 does not give any information on the continuous dependence of the solution at \(t=0\). To retain continuous dependence at \(t=0\), one has to introduce the following stronger a priori assumption:

$$ \bigl\Vert u(\cdot,0) \bigr\Vert _{H^{p}}\leq E,\quad p>0, $$
(4.15)

where \(\|u(\cdot,0)\|_{H^{p}}\) denotes the norm in the Sobolev space \(H^{p}(\mathbb{ R})\) defined by

$$ \bigl\Vert u(\cdot,0) \bigr\Vert _{H^{p}}:= \biggl( \int^{\infty}_{-\infty} \bigl\vert \hat{u}(\xi ,0) \bigr\vert ^{2} \bigl(1+\xi^{2} \bigr)^{p}\,d\xi \biggr)^{\frac{1}{2}}. $$
(4.16)

Theorem 4.6

Suppose that \(u^{\delta}_{\beta}(x,t)\) is the regularized solution with noisy data \(g^{\delta}(x)\) and that \(u(x,t)\) is the exact solution with the exact data \(g(x)\). Let assumptions (1.5) and (4.15) be satisfied. If we choose

$$ \beta= \biggl(\frac{\delta}{E} \biggr)^{\gamma}, $$
(4.17)

then for \(t=0\), there holds the estimate

$$ \bigl\Vert u^{\delta}_{\beta}(\cdot,0)-u(\cdot,0) \bigr\Vert \leq K_{3}\delta^{\frac {1}{2}}E^{\frac{1}{2}}+ E\max \biggl\{ \biggl(\frac{\delta}{E} \biggr)^{\gamma-\frac{1}{2}\gamma ^{2}T}, \biggl( \frac{\gamma}{4}\ln\frac{E}{\delta} \biggr)^{-\frac{p}{\alpha }} \biggr\} , $$
(4.18)

where

$$ K_{3}= \textstyle\begin{cases} \frac{2\gamma-1}{2\gamma} (\frac{1}{2\gamma-1} )^{\frac{1}{2\gamma }}, & \frac{1}{2}< \gamma< \frac{2}{T}, \\ 1, & \gamma=\frac{1}{2}. \end{cases} $$

Proof

Similarly to the proof of Theorem 4.3, we obtain

$$\begin{aligned} \bigl\Vert u^{\delta}_{\beta}(\cdot,0)-u(\cdot,0) \bigr\Vert &= \bigl\Vert \hat{u}^{\delta }_{\beta}(\cdot,0) -\hat{u}(\cdot,0) \bigr\Vert \\ &= \bigl\Vert \hat{u}^{\delta}_{\beta}(\cdot,0)- \hat{u}_{\beta}(\cdot ,0)+\hat{u}_{\beta}(\cdot,0) -\hat{u}( \cdot,0) \bigr\Vert \\ &\leq \bigl\Vert \hat{u}^{\delta}_{\beta}(\cdot,0)- \hat{u}_{\beta}(\cdot ,0) \bigr\Vert + \bigl\Vert \hat{u}_{\beta}(\cdot,0) -\hat{u}(\cdot,0) \bigr\Vert \\ &=I_{3}+I_{4}. \end{aligned}$$
(4.19)

To estimate \(I_{3}\), we set \(s:=e^{|\xi|^{\alpha}\cos\frac{\theta\pi }{2}}\) and use Lemma 4.1 to get

$$ I_{3}\leq K_{3}\delta \beta^{-\frac{1}{2\gamma}}, $$
(4.20)

where

$$ K_{3}= \textstyle\begin{cases} \frac{2\gamma-1}{2\gamma} (\frac{1}{2\gamma-1} )^{\frac{1}{2\gamma }}, & \frac{1}{2}< \gamma< \frac{2}{T}, \\ 1, & \gamma=\frac{1}{2}. \end{cases} $$

To estimate \(I_{4}\), we have

$$\begin{aligned} I_{4}&= \biggl\Vert \frac{e^{\psi^{\theta}_{\alpha}(\xi)T}}{1+\beta|e^{T\psi ^{\theta}_{\alpha}(\xi)}|^{2\gamma}}\hat{g}(\xi)- e^{\psi^{\theta}_{\alpha}(\xi)T}\hat{g}(\xi) \biggr\Vert \\ &= \biggl\Vert \biggl(\frac{1}{1+\beta|e^{T\psi^{\theta}_{\alpha}(\xi )}|^{2\gamma}}-1 \biggr) e^{\psi^{\theta}_{\alpha}(\xi)T}\hat{g}( \xi) \biggr\Vert \\ &= \biggl\Vert \frac{\beta|e^{T\psi^{\theta}_{\alpha}(\xi)}|^{2\gamma }}{1+\beta|e^{T\psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}} \bigl(1+\xi^{2} \bigr)^{-\frac{p}{2}} \bigl(1+\xi^{2} \bigr)^{\frac{p}{2}}e^{T \psi^{\theta }_{\alpha}(\xi)} \hat{g}(\xi) \biggr\Vert \\ &\leq E\sup_{\xi\in\mathbb{R}}\frac{\beta|e^{T\psi^{\theta}_{\alpha }(\xi)}|^{2\gamma}}{1+\beta(1+\xi^{2})^{\frac{p}{2}} |e^{T\psi^{\theta }_{\alpha}(\xi)}|^{2\gamma}} \\ &=E \sup_{\xi\in\mathbb{R}}\frac{\beta e^{2\gamma T|\xi|^{\alpha}\cos \frac{\theta\pi}{2}}}{1+\beta(1+\xi^{2})^{\frac{p}{2}} e^{2\gamma T |\xi|^{\alpha}\cos\frac{\theta\pi}{2}}}. \end{aligned}$$
(4.21)

Now, we distinguish two cases to estimate (4.21).

Case 1. For \(|\xi|^{\alpha}\leq\frac{1}{2}\ln\frac{1}{\sqrt{\beta }}\), we have

$$ \frac{\beta e^{2\gamma T|\xi|^{\alpha}\cos\frac{\theta\pi }{2}}}{1+\beta(1+\xi^{2})^{\frac{p}{2}} e^{2\gamma T |\xi|^{\alpha}\cos \frac{\theta\pi}{2}}} \leq\beta e^{2\gamma T|\xi|^{\alpha}\cos\frac{\theta\pi}{2}} \leq\beta e^{2\gamma T|\xi|^{\alpha}} \leq\beta^{1-\frac{\gamma T}{2}}. $$
(4.22)

Case 2. For \(|\xi|^{\alpha}\geq\frac{1}{2}\ln\frac{1}{\sqrt{\beta }}\), we obtain

$$\begin{aligned} \frac{\beta e^{2\gamma T|\xi|^{\alpha}\cos\frac{\theta\pi }{2}}}{1+\beta(1+\xi^{2})^{\frac{p}{2}} e^{2\gamma T |\xi|^{\alpha}\cos \frac{\theta\pi}{2}}} &\leq\frac{\beta e^{2\gamma T|\xi|^{\alpha}\cos\frac{\theta\pi }{2}}}{\beta(1+\xi^{2})^{\frac{p}{2}} e^{2\gamma T |\xi|^{\alpha}\cos \frac{\theta\pi}{2}}} \leq \bigl(1+\xi^{2} \bigr)^{-\frac{p}{2}} \\ &\leq|\xi|^{-p} = \bigl( \vert \xi \vert ^{\alpha} \bigr)^{-\frac{p}{\alpha}} \leq \biggl(\frac{1}{2}\ln\frac{1}{\sqrt{\beta}} \biggr)^{-\frac{p}{\alpha}}. \end{aligned}$$
(4.23)

Combining (4.21), (4.22), and (4.23), we know

$$ I_{4}\leq E\max \biggl\{ \beta^{1-\frac{\gamma T}{2}}, \biggl( \frac{1}{2}\ln \frac{1}{\sqrt{\beta}} \biggr)^{-\frac{p}{\alpha}} \biggr\} . $$
(4.24)

Summarizing (4.17), (4.19), (4.20), and (4.24), we can easily get the convergence estimate

$$ \bigl\Vert u^{\delta}_{\beta}(\cdot,0)-u(\cdot,0) \bigr\Vert \leq K_{3}\delta^{\frac {1}{2}}E^{\frac{1}{2}}+ E\max \biggl\{ \biggl(\frac{\delta}{E} \biggr)^{\gamma-\frac{1}{2}\gamma ^{2}T}, \biggl(\frac{\gamma}{4}\ln \frac{E}{\delta} \biggr)^{-\frac{p}{\alpha }} \biggr\} . $$

The theorem is proved. □
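The exponent algebra behind (4.18) can be double-checked numerically: with \(\beta=(\delta/E)^{\gamma}\) from (4.17), the quantities appearing in (4.20) and (4.24) reduce to the terms displayed in the theorem. The parameter values below are illustrative.

```python
import numpy as np

# Check that beta = (delta/E)^gamma turns the bounds (4.20) and (4.24)
# into the terms of (4.18); parameter values are illustrative.
gamma, T, delta, E = 1.0, 1.0, 1e-6, 2.0
beta = (delta / E) ** gamma

d1 = delta * beta ** (-1 / (2 * gamma)) - delta ** 0.5 * E ** 0.5
d2 = beta ** (1 - gamma * T / 2) - (delta / E) ** (gamma - gamma**2 * T / 2)
d3 = 0.5 * np.log(1 / np.sqrt(beta)) - (gamma / 4) * np.log(E / delta)
print(d1, d2, d3)                              # all three differences vanish
```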

4.2 A posteriori selection rule

In this subsection, we give the convergence estimate for \(\|u^{\delta }_{\beta}(\cdot,t)-u(\cdot,t)\|\) by using an a posteriori choice rule for the regularization parameter, i.e., Morozov’s discrepancy principle.

According to Morozov’s discrepancy principle [37], we adopt the regularization parameter β as the solution of the equation

$$ \biggl\Vert \frac{1}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}\hat{g}^{\delta}(\xi )- \hat{g}^{\delta}(\xi) \biggr\Vert =\tau\delta. $$
(4.25)

Here, \(\tau>1\) is a constant.

Lemma 4.7

Let \(\rho(\beta):= \|\frac{1}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}\hat{g}^{\delta}(\xi )-\hat{g}^{\delta}(\xi) \|\), then the following results hold:

  1. (a)

    \(\rho(\beta)\) is a continuous function;

  2. (b)

    \(\lim_{\beta\rightarrow0}\rho(\beta)=0\);

  3. (c)

    \(\lim_{\beta\rightarrow\infty}\rho(\beta)=\|\hat{g}^{\delta }(\xi)\|\);

  4. (d)

    \(\rho(\beta)\) is a strictly increasing function for \(\beta\in (0,\infty)\).

The proof is straightforward, so we omit it here.
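The properties in Lemma 4.7 are easy to observe numerically. In the sketch below, \(\hat{g}^{\delta}\) is replaced by an arbitrary synthetic Gaussian on a truncated frequency grid; all choices are illustrative.

```python
import numpy as np

# Discretized version of rho(beta) from Lemma 4.7 on a truncated grid;
# the Gaussian g_hat is a synthetic stand-in for the data \hat{g}^\delta.
alpha, theta, T, gamma = 1.5, 0.2, 1.0, 1.0
xi = np.linspace(-10.0, 10.0, 2001)
dxi = xi[1] - xi[0]
g_hat = np.exp(-xi**2)
K = np.exp(2 * gamma * T * np.abs(xi) ** alpha * np.cos(theta * np.pi / 2))

def rho(beta):
    # ||(1/(1 + beta*K)) g_hat - g_hat|| = ||beta*K/(1 + beta*K) * g_hat||
    w = beta * K / (1 + beta * K)
    return np.sqrt(np.sum((w * g_hat) ** 2) * dxi)

g_norm = np.sqrt(np.sum(g_hat**2) * dxi)
print(rho(1e-8), rho(1e-4), rho(1.0))   # (d): strictly increasing in beta
print(rho(1e12), g_norm)                # (c): rho(beta) -> ||g_hat|| as beta grows
```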

Remark 4.8

According to Lemma 4.7, we find that, if \(0<\tau\delta<\| \hat{g}^{\delta}(\xi)\|\), Equation (4.25) has a unique solution.

Lemma 4.9

If β is the solution of Equation (4.25), then the following inequality holds:

$$ \biggl\Vert \frac{1}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}\hat{g}^{\delta}(\xi )- \hat{g}(\xi) \biggr\Vert \leq(\tau+1) \delta. $$
(4.26)

Proof

Due to the triangle inequality and Equation (4.25), there holds

$$\begin{aligned}& \biggl\Vert \frac{1}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}\hat{g}^{\delta}(\xi )-\hat{g}(\xi) \biggr\Vert \\& \quad = \biggl\Vert \frac{1}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}\hat{g}^{\delta}(\xi )- \hat{g}^{\delta}(\xi)+ \hat{g}^{\delta}(\xi)-\hat{g}(\xi) \biggr\Vert \\& \quad \leq \biggl\Vert \frac{1}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}\hat{g}^{\delta}(\xi )- \hat{g}^{\delta}(\xi) \biggr\Vert + \bigl\Vert \hat{g}^{\delta}( \xi)-\hat{g}(\xi) \bigr\Vert \\& \quad \leq (\tau+1)\delta. \end{aligned}$$

 □

Lemma 4.10

If β is the solution of Equation (4.25), the following inequality also holds:

$$ \beta^{-\frac{1}{2\gamma}}\leq\frac{K_{4} E}{(\tau-1)\delta}, $$
(4.27)

where \(K_{4}=\frac{1}{2\gamma}(2\gamma-1)^{1-\frac{1}{2\gamma}}\).

Proof

Due to the triangle inequality and Equation (4.25), there holds

$$\begin{aligned} \tau\delta =& \biggl\Vert \frac{1}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}\hat{g}^{\delta}(\xi )- \hat{g}^{\delta}(\xi) \biggr\Vert \\ =& \biggl\Vert \frac{\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}\hat{g}^{\delta}(\xi ) \biggr\Vert \\ =& \biggl\Vert \frac{\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}} \bigl(\hat{g}^{\delta}(\xi )-\hat{g}( \xi) \bigr)+ \frac{\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}\hat{g}(\xi) \biggr\Vert \\ \leq& \biggl\Vert \frac{\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}} \bigl(\hat{g}^{\delta}(\xi )- \hat{g}(\xi) \bigr) \biggr\Vert + \biggl\Vert \frac{\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}\hat{g}(\xi) \biggr\Vert \\ \leq& \delta+ \biggl\Vert \frac{\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma-1}}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}e^{T \psi^{\theta}_{\alpha}(\xi)}\hat{g}(\xi) \biggr\Vert \\ \leq& \delta+\beta E \sup_{\xi\in\mathbb{R}}\frac{|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma-1}}{1+\beta|e^{T \psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}} \\ \leq& \delta+\beta E \sup_{\xi\in\mathbb{R}}\frac{e^{(2\gamma-1)T |\xi|^{\alpha}\cos\frac{\theta\pi}{2}}}{1+\beta e^{2\gamma T |\xi|^{\alpha}\cos\frac{\theta\pi}{2}}}. \end{aligned}$$
(4.28)

Let

$$ C(\xi):=\frac{e^{(2\gamma-1)T |\xi|^{\alpha}\cos\frac{\theta\pi}{2}}}{1+\beta e^{2\gamma T |\xi|^{\alpha}\cos\frac{\theta\pi}{2}}}. $$
(4.29)

Here, we set \(s:=e^{|\xi|^{\alpha}\cos\frac{\theta\pi}{2}}\), then \(C(\xi)\) can be written as

$$ C(s):=\frac{s^{(2\gamma-1)T}}{1+\beta s^{2\gamma T}}. $$
(4.30)

From Lemma 4.1, we obtain

$$ C(s)\leq K_{4}\beta^{\frac{1}{2\gamma}-1}, $$
(4.31)

where \(K_{4}=\frac{1}{2\gamma}(2\gamma-1)^{1-\frac{1}{2\gamma}}\).

Summarizing (4.28) and (4.31), we have

$$ \tau\delta\leq\delta+K_{4}\beta^{\frac{1}{2\gamma}}E. $$
(4.32)

Then (4.27) follows immediately. The lemma is proved. □

Now we give the main result of this subsection.

Theorem 4.11

Suppose that a priori condition \(\|u(\cdot,0)\|\leq E\) and the noise assumption (1.5) hold, and there exists \(\tau>1\) such that \(0<\tau\delta<\|\hat{g}^{\delta}\|\). The regularization parameter \(\beta>0\) is chosen by Morozov’s discrepancy principle (4.25). Then we have the following convergence estimate:

$$ \bigl\Vert u^{\delta}_{\beta}(\cdot,t)-u(\cdot,t) \bigr\Vert \leq \biggl(\frac {K_{4}K_{5}}{(\tau-1)}+1 \biggr)^{(1-t/T)}( \tau+1)^{t/T}\delta^{t/T}E^{(1-t/T)}, $$
(4.33)

where

$$ K_{5}= \textstyle\begin{cases} \frac{2\gamma-1}{2\gamma}(\frac{1}{2\gamma-1})^{\frac{1}{2\gamma}}, & \frac{1}{2}< \gamma< \frac{2}{T}, \\ 1, & \gamma=\frac{1}{2}. \end{cases} $$

Proof

Due to the Parseval formula and Lemma 4.9, we obtain

$$\begin{aligned}& \bigl\Vert u^{\delta}_{\beta}(\cdot,t)-u(\cdot,t) \bigr\Vert ^{2} \\& \quad = \bigl\Vert \hat{u}^{\delta}_{\beta}(\cdot,t)-\hat{u}(\cdot,t) \bigr\Vert ^{2} \\& \quad = \biggl\Vert \frac{e^{\psi^{\theta}_{\alpha}(\xi)(T-t)}}{1+\beta \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert ^{2\gamma}}\hat{g}^{\delta}(\xi)-e^{\psi^{\theta}_{\alpha}(\xi)(T-t)}\hat{g}(\xi) \biggr\Vert ^{2} \\& \quad = \biggl\Vert e^{\psi^{\theta}_{\alpha}(\xi)(T-t)} \biggl(\frac{1}{1+\beta \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert ^{2\gamma}}\hat{g}^{\delta}(\xi)-\hat{g}(\xi) \biggr) \biggr\Vert ^{2} \\& \quad = \int^{\infty}_{-\infty} \bigl\vert e^{\psi^{\theta}_{\alpha}(\xi)(T-t)} \bigr\vert ^{2} \biggl\vert \frac{1}{1+\beta \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert ^{2\gamma}}\hat{g}^{\delta}(\xi)-\hat{g}(\xi) \biggr\vert ^{2}\,d\xi \\& \quad = \int^{\infty}_{-\infty} \bigl\vert e^{\psi^{\theta}_{\alpha}(\xi)(T-t)} \bigr\vert ^{2} \biggl\vert \frac{1}{1+\beta \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert ^{2\gamma}}\hat{g}^{\delta}(\xi)-\hat{g}(\xi) \biggr\vert ^{2(1-t/T)} \\& \qquad {}\cdot \biggl\vert \frac{1}{1+\beta \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert ^{2\gamma}}\hat{g}^{\delta}(\xi)-\hat{g}(\xi) \biggr\vert ^{2t/T}\,d\xi \\& \quad \leq \biggl( \int^{\infty}_{-\infty} \biggl( \bigl\vert e^{\psi^{\theta}_{\alpha}(\xi)(T-t)} \bigr\vert ^{2} \biggl\vert \frac{1}{1+\beta \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert ^{2\gamma}}\hat{g}^{\delta}(\xi)-\hat{g}(\xi) \biggr\vert ^{2(1-t/T)} \biggr)^{\frac{T}{T-t}}\,d\xi \biggr)^{\frac{T-t}{T}} \\& \qquad {}\cdot \biggl( \int^{\infty}_{-\infty} \biggl( \biggl\vert \frac{1}{1+\beta \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert ^{2\gamma}}\hat{g}^{\delta}(\xi)-\hat{g}(\xi) \biggr\vert ^{2t/T} \biggr)^{\frac{T}{t}}\,d\xi \biggr)^{\frac{t}{T}} \\& \quad = \biggl( \int^{\infty}_{-\infty} \bigl\vert e^{T\psi^{\theta}_{\alpha}(\xi)} \bigr\vert ^{2} \biggl\vert \frac{1}{1+\beta \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert ^{2\gamma}}\hat{g}^{\delta}(\xi)-\hat{g}(\xi) \biggr\vert ^{2}\,d\xi \biggr)^{\frac{T-t}{T}} \\& \qquad {}\cdot \biggl( \int^{\infty}_{-\infty} \biggl\vert \frac{1}{1+\beta \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert ^{2\gamma}}\hat{g}^{\delta}(\xi)-\hat{g}(\xi) \biggr\vert ^{2}\,d\xi \biggr)^{\frac{t}{T}} \\& \quad = \biggl\Vert \bigl\vert e^{T\psi^{\theta}_{\alpha}(\xi)} \bigr\vert \biggl(\frac{1}{1+\beta \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert ^{2\gamma}}\hat{g}^{\delta}(\xi)-\hat{g}(\xi) \biggr) \biggr\Vert ^{2(1-t/T)} \\& \qquad {}\cdot \biggl\Vert \frac{1}{1+\beta \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert ^{2\gamma}}\hat{g}^{\delta}(\xi)-\hat{g}(\xi) \biggr\Vert ^{2t/T} \\& \quad = \biggl\Vert \frac{ \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert }{1+\beta \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert ^{2\gamma}} \bigl(\hat{g}^{\delta}(\xi)-\hat{g}(\xi) \bigr)+ \biggl(\frac{ \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert }{1+\beta \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert ^{2\gamma}}- \bigl\vert e^{T\psi^{\theta}_{\alpha}(\xi)} \bigr\vert \biggr)\hat{g}(\xi) \biggr\Vert ^{2(1-t/T)} \\& \qquad {}\cdot \biggl\Vert \frac{1}{1+\beta \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert ^{2\gamma}}\hat{g}^{\delta}(\xi)-\hat{g}(\xi) \biggr\Vert ^{2t/T} \\& \quad \leq \biggl( \biggl\Vert \frac{ \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert }{1+\beta \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert ^{2\gamma}} \bigl(\hat{g}^{\delta}(\xi)-\hat{g}(\xi) \bigr) \biggr\Vert \\& \qquad {}+ \biggl\Vert \biggl(\frac{ \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert }{1+\beta \vert e^{T\psi^{\theta}_{\alpha}(\xi)} \vert ^{2\gamma}}- \bigl\vert e^{T\psi^{\theta}_{\alpha}(\xi)} \bigr\vert \biggr)\hat{g}(\xi) \biggr\Vert \biggr)^{2(1-t/T)}\cdot \bigl((\tau+1)\delta \bigr)^{2t/T} \\& \quad \leq \biggl(\delta\sup_{\xi\in\mathbb{R}}\frac{e^{T \vert \xi \vert ^{\alpha}\cos\frac{\theta\pi}{2}}}{1+\beta e^{2\gamma T \vert \xi \vert ^{\alpha}\cos\frac{\theta\pi}{2}}}+E \biggr)^{2(1-t/T)}\cdot \bigl((\tau+1)\delta \bigr)^{2t/T}. \end{aligned}$$
(4.34)

Let

$$ D(\xi):=\frac{e^{T|\xi|^{\alpha}\cos\frac{\theta\pi}{2}}}{1+\beta e^{2\gamma T|\xi|^{\alpha}\cos\frac{\theta\pi}{2}}}. $$
(4.35)

We now distinguish two cases to estimate \(D(\xi)\) in (4.35).

Case 1. For \(\frac{1}{2}<\gamma<\frac{2}{T}\), we set \(s:=e^{|\xi |^{\alpha}\cos\frac{\theta\pi}{2}}\), and using Lemma 4.1, we have

$$ D(s):=\frac{s^{T}}{1+\beta s^{2\gamma T}}\leq K_{5} \beta^{-\frac{1}{2\gamma}}, $$
(4.36)

where \(K_{5}=\frac{2\gamma-1}{2\gamma}(\frac{1}{2\gamma-1})^{\frac {1}{2\gamma}}\).
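For the reader's convenience, we sketch the elementary calculus behind the constant \(K_{5}\) delivered by Lemma 4.1. Setting \(f(s)=\frac{s^{T}}{1+\beta s^{2\gamma T}}\), the condition \(f'(s)=0\) gives \(\beta(2\gamma-1)s^{2\gamma T}=1\), i.e., \(s_{*}^{2\gamma T}=\frac{1}{\beta(2\gamma-1)}\), and hence

$$ f(s_{*})=\frac{s_{*}^{T}}{1+\frac{1}{2\gamma-1}}=\frac{2\gamma-1}{2\gamma} \biggl(\frac{1}{\beta(2\gamma-1)} \biggr)^{\frac{1}{2\gamma}} =\frac{2\gamma-1}{2\gamma} \biggl(\frac{1}{2\gamma-1} \biggr)^{\frac{1}{2\gamma}}\beta^{-\frac{1}{2\gamma}}=K_{5}\beta^{-\frac{1}{2\gamma}}. $$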

Case 2. For \(\gamma=\frac{1}{2}\), we set \(p=|\xi|^{\alpha}\cos\frac {\theta\pi}{2}\), and using Lemma 4.2, we obtain

$$ D(p)=\frac{e^{pT}}{1+\beta e^{pT}}\leq\beta^{-1}. $$
(4.37)
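The two case bounds also admit a quick numerical sanity check (not part of the proof; the grid and the values of \(T\), \(\beta\), \(\gamma\) below are arbitrary illustrative choices):

```python
import numpy as np

# Sanity check of the two bounds on D, with illustrative parameters:
# Case 1 (gamma > 1/2):  s**T / (1 + beta*s**(2*gamma*T)) <= K5 * beta**(-1/(2*gamma))
# Case 2 (gamma = 1/2):  s**T / (1 + beta*s**T)           <= 1/beta
T, beta = 1.0, 1e-3
s = np.linspace(1.0, 1e4, 200_000)   # s = exp(|xi|^alpha cos(theta*pi/2)) >= 1

# Case 1 with gamma = 0.8
gamma = 0.8
K5 = (2*gamma - 1)/(2*gamma) * (1/(2*gamma - 1))**(1/(2*gamma))
D1 = s**T / (1 + beta * s**(2*gamma*T))
assert D1.max() <= K5 * beta**(-1/(2*gamma)) + 1e-12

# Case 2: here s**(2*gamma*T) reduces to s**T
D2 = s**T / (1 + beta * s**T)
assert D2.max() <= 1/beta

print("both bounds hold on the sample grid")
```

The maximizer of Case 1 lies at \(s_{*}=(\beta(2\gamma-1))^{-1/(2\gamma T)}\), which falls inside the sampled interval for these parameter values, so the check exercises the bound near its extremum.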

Combining (4.34), (4.35), (4.36), (4.37), and Lemma 4.10, we obtain

$$\begin{aligned} \bigl\Vert u^{\delta}_{\beta}(\cdot,t)-u(\cdot,t) \bigr\Vert ^{2}&\leq \bigl(\delta K_{5}\beta^{- \frac{1}{2\gamma}}+E \bigr)^{2(1-t/T)} \bigl((\tau+1)\delta \bigr)^{2t/T} \\ &\leq \biggl(\delta K_{5} \frac{K_{4}E}{(\tau-1)\delta}+E \biggr)^{2(1-t/T)} \bigl((\tau+1)\delta \bigr)^{2t/T} \\ &= \biggl(\frac{K_{4}K_{5}}{(\tau-1)}+1 \biggr)^{2(1-t/T)}E^{2(1-t/T)} \bigl(( \tau +1)\delta \bigr)^{2t/T} \\ &= \biggl(\frac{K_{4}K_{5}}{(\tau-1)}+1 \biggr)^{2(1-t/T)}(\tau +1)^{2t/T} \delta^{2t/T}E^{2(1-t/T)}. \end{aligned}$$
(4.38)

Then the conclusion of the theorem can be obtained directly from (4.38). □

5 Conclusions

In this paper, a backward problem for the space-fractional diffusion equation has been considered. We have analyzed the optimal error bound for the problem under a source condition and, to overcome its ill-posedness, have solved it by a regularization method based on the idea of a modified ‘kernel’. Convergence estimates are obtained under the a priori and the a posteriori regularization parameter choice rules, respectively.
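As a closing illustration, the regularized solution \(\hat{u}^{\delta}_{\beta}(\xi,t)=\frac{e^{\psi^{\theta}_{\alpha}(\xi)(T-t)}}{1+\beta|e^{T\psi^{\theta}_{\alpha}(\xi)}|^{2\gamma}}\hat{g}^{\delta}(\xi)\) can be evaluated with a discrete Fourier transform. The following Python sketch makes several illustrative assumptions that are ours rather than the paper's: the sign convention for \(\psi^{\theta}_{\alpha}\) (with \(|e^{T\psi^{\theta}_{\alpha}(\xi)}|=e^{T|\xi|^{\alpha}\cos\frac{\theta\pi}{2}}\), matching the estimates above), the truncation of \(\mathbb{R}\) to a periodic grid, and all parameter values.

```python
import numpy as np

def regularized_backward_solution(g_delta, L, T, t, alpha, theta, beta, gamma):
    """Sketch: evaluate u_beta^delta(., t) from noisy final data g_delta sampled
    on a uniform grid of [-L, L), using the modified-'kernel' filter
    exp(psi*(T-t)) / (1 + beta*|exp(T*psi)|**(2*gamma)).
    Grid and symbol conventions are illustrative choices, not from the paper."""
    n = len(g_delta)
    xi = 2*np.pi*np.fft.fftfreq(n, d=2*L/n)      # discrete Fourier variable
    a = np.abs(xi)**alpha
    c, sn = np.cos(theta*np.pi/2), np.sin(theta*np.pi/2)
    # Log of the filter magnitude, computed stably via logaddexp:
    #   (T-t)*a*c - log(1 + beta*exp(2*gamma*T*a*c))
    log_mag = (T - t)*a*c - np.logaddexp(0.0, np.log(beta) + 2*gamma*T*a*c)
    phase = (T - t)*a*np.sign(xi)*sn             # Im(psi)*(T-t), one sign convention
    filt = np.exp(log_mag + 1j*phase)
    return np.fft.ifft(filt*np.fft.fft(g_delta)).real

# Illustrative call: recover u(., 0) from Gaussian final data on [-10, 10)
x = np.linspace(-10, 10, 256, endpoint=False)
u0 = regularized_backward_solution(np.exp(-x**2), L=10.0, T=1.0, t=0.0,
                                   alpha=1.8, theta=0.1, beta=1e-4, gamma=1.0)
```

Working in the log domain avoids overflow of \(e^{T|\xi|^{\alpha}\cos\frac{\theta\pi}{2}}\) at large frequencies: for \(\gamma\geq\frac{1}{2}\) the filter magnitude decays there, even though numerator and denominator separately exceed floating-point range.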