1 Introduction

The modified Helmholtz equation, also called the Yukawa equation as pointed out in [1], appears in implicit marching schemes for the heat equation, in Debye-Hückel theory, and in the linearization of the Poisson-Boltzmann equation. The underlying free-space Green's function is usually referred to as the Yukawa potential in nuclear physics. In physics, chemistry, and biology, where Coulomb forces are damped by screening effects, this Green's function is also known as the screened Coulomb potential. The modified Helmholtz equation has also been widely applied in microstretch elastic materials [2] and in the thermoelastodynamics of microstretch bodies [3, 4]. In recent years, much research has been devoted to inverse problems for the modified Helmholtz equation. The Cauchy problems associated with the modified Helmholtz equation have been studied by various numerical methods, such as the Landweber method combined with the boundary element method and the conjugate gradient method [5], the method of fundamental solutions (MFS) [6], the iteration regularization method [7], Tikhonov-type regularization [8], quasi-reversibility and truncation methods [9–11], quasi-boundary regularization [12, 13], and so on. Inverse source problems arise in many branches of science and engineering, for example, heat conduction, crack identification, electromagnetic theory, geophysical prospecting, and pollutant detection. For heat source identification, there is a large body of results for different forms of heat sources [14–22]. To the best of the authors' knowledge, however, there have been few papers on identifying an unknown source for the modified Helmholtz equation. In [23, 24], the authors used the quasi-reversibility method to identify the unknown source of the modified Helmholtz equation on a half strip region and a half infinite region, respectively. In [25], the authors used the simplified Tikhonov regularization method, and in [26, 27] the Tikhonov regularization method, to identify the unknown source for the modified Helmholtz equation. In [23–27], however, the regularization parameters are selected by an a priori rule. Any a priori method has the defect that the a priori choice of the regularization parameter depends on the a priori bound E of the unknown solution. In practice, the a priori bound E cannot be known exactly, and using a wrong constant E may lead to a badly regularized solution.

In this paper, we use the Landweber iterative method to identify the unknown source of the modified Helmholtz equation. The Landweber iterative method is a popular regularization method in inverse problem research. In [28], the authors used this iterative method to solve linear inverse problems and obtained the convergence order of the regularized solution under an a posteriori regularization parameter choice rule. In [29], the authors used the iterative method to solve nonlinear problems. We give not only an a priori choice of the regularization parameter, but also an a posteriori choice which depends only on the measured data. Moreover, we compare the effectiveness of the a posteriori choice rule with that of the a priori choice rule. To the best of the authors' knowledge, there are few papers choosing the regularization parameter by an a posteriori rule for this problem. We consider the following modified Helmholtz equation:

$$ \textstyle\begin{cases} \triangle u(x,y)-k^{2}u(x,y)=f(x), & 0< x\leq \pi, 0< y< +\infty, \\ u(0,y)=u(\pi,y)=0, & 0\leq y< +\infty, \\ u(x,0)=0,\qquad u(x,y)\vert _{y\rightarrow \infty }\quad \mbox{is bounded}, & 0\leq x\leq \pi, \\ u(x,1)=g(x), & 0\leq x\leq \pi, \end{cases} $$
(1.1)

where \(g(x)\) is a given function in \(L^{2}(0, \pi)\) and \(f(x)\) is the unknown source. The constant k is the wave number. We use the additional condition \(u(x,1)=g(x)\) to determine the unknown source \(f(x)\). In practice, \(g(x)\) is obtained by measurement and therefore contains measurement errors; we assume that the measured data \(g^{\delta }(x)\in L^{2}(0, \pi)\) satisfies

$$ \bigl\Vert g-g^{\delta }\bigr\Vert _{L^{2}(0, \pi)}\le \delta, $$
(1.2)

where \(\Vert \cdot \Vert \) denotes the \(L^{2}(0,\pi)\) norm and the constant \(\delta >0\) is the noise level, bounding the measurement error.

The remainder of the paper is organized as follows. In Section 2, we formulate some preliminary results. In Section 3, we present the Landweber iterative regularization method. Convergence estimates under an a priori and an a posteriori choice rule are given in Section 4. Numerical examples showing the effectiveness of our method are presented in Section 5. We give a brief conclusion in Section 6.

2 Some auxiliary results

In this section, we give some auxiliary results which will be used in our main conclusions.

Lemma 2.1

For \(0< h\leq 1\) and \(n\geq 1\), the following inequalities hold:

$$\begin{aligned}& \begin{aligned}&(1-h)^{n}h\leq \frac{1}{n+1}, \\ &\frac{1-(1-h)^{n}}{h}\leq n. \end{aligned} \end{aligned}$$
(2.1)

Proof

Denote \(\rho (h)=(1-h)^{n}h\) and \(\beta (h)=1-(1-h)^{n}-nh\), then

$$\begin{aligned}& \rho '(h)=-n(1-h)^{n-1}h+(1-h)^{n}, \\& \beta '(h)=n(1-h)^{n-1}-n. \end{aligned}$$

Setting \(\rho '(h)=0\), we have

$$h=\frac{1}{n+1}. $$

Since \(\rho (0)=\rho (1)=0\), \(\rho (h)\) attains its unique maximum at \(h=\frac{1}{n+1}\). Therefore,

$$\rho (h)=(1-h)^{n}h\leq \frac{1}{n+1}. $$

Because \(\beta '(h)\leq 0\) and \(\beta (0)=0\), we obtain \(\beta (h)\leq 0\), i.e., \(1-(1-h)^{n}\leq nh\); dividing by \(h>0\) yields the second inequality.

This ends the proof of Lemma 2.1. □
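These elementary bounds are also easy to check numerically; the following minimal sketch (the grid resolution and the range of n are arbitrary illustrative choices) verifies (2.1):

```python
import numpy as np

# Sanity check of (2.1): (1-h)^n h <= 1/(n+1) and (1-(1-h)^n)/h <= n.
for n in range(1, 51):
    h = np.linspace(1e-6, 1.0, 2001)
    assert ((1 - h) ** n * h).max() <= 1.0 / (n + 1) + 1e-12
    assert ((1 - (1 - h) ** n) / h).max() <= n + 1e-12
print("inequalities (2.1) hold on the test grid")
```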

Lemma 2.2

For \(0< h\leq 1\), \(0<\alpha \leq 1\), and \(n\geq 1\), Lemma 2.1 can be strengthened to the following inequalities:

$$\begin{aligned}& (1-h)^{n}h^{\alpha }\leq {(n+1)}^{-\alpha }, \\& \frac{1-(1-h)^{n}}{h^{\alpha }}\leq n^{\alpha }. \end{aligned}$$

Proof

In fact, since \(x\leq x^{\alpha }\) for every \(x\in [0,1]\) and \(0<\alpha \leq 1\), the established results (2.1) give

$$\begin{aligned}& (1-h)^{n}h^{\alpha }\leq \bigl[(1-h)^{n}h \bigr]^{\alpha }\leq {(n+1)}^{-\alpha }, \\& \frac{1-(1-h)^{n}}{h^{\alpha }}\leq \biggl[\frac{1-(1-h)^{n}}{h}\biggr]^{\alpha } \leq n^{\alpha }. \end{aligned}$$

 □

Lemma 2.3

For \(t\geq 1\), we have

$$ \frac{1}{1-e^{-\sqrt{t}}}\leq 2. $$
(2.2)

Indeed, since \(e^{-\sqrt{t}}\leq e^{-1}<\frac{1}{2}\) for \(t\geq 1\), we have \(1-e^{-\sqrt{t}}>\frac{1}{2}\).

Lemma 2.4

For \(t\geq 1\), we have

$$ \frac{1}{2t}\leq \frac{1-e^{-\sqrt{t}}}{t}\leq \frac{1}{t}. $$
(2.3)

This follows from Lemma 2.3 together with \(0<1-e^{-\sqrt{t}}<1\).

3 Landweber iterative regularization method

By the method of separation of variables, the solution of problem (1.1) is given by

$$u(x,y)=-\sum_{n=1}^{\infty } \frac{1-e^{-\sqrt{n^{2}+k^{2}}y}}{n^{2}+k ^{2}}f_{n}X_{n}, $$

where \(f_{n}=(f,X_{n})\) and \(X_{n}=\sqrt{\frac{2}{\pi }}\sin(nx)\) (\(n=1,2, \ldots\)) form an orthonormal basis of \(L^{2}(0,\pi)\).
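For the reader's convenience, we recall the computation behind this formula: substituting \(u(x,y)=\sum_{n=1}^{\infty }u_{n}(y)X_{n}(x)\) and \(f(x)=\sum_{n=1}^{\infty }f_{n}X_{n}(x)\) into (1.1), each coefficient solves

$$ u_{n}''(y)-\bigl(n^{2}+k^{2}\bigr)u_{n}(y)=f_{n},\qquad u_{n}(0)=0, \qquad u_{n}(y) \mbox{ bounded as } y\rightarrow \infty, $$

whose bounded solution is

$$ u_{n}(y)=\frac{f_{n}}{n^{2}+k^{2}}e^{-\sqrt{n^{2}+k^{2}}y}-\frac{f_{n}}{n^{2}+k^{2}}=-\frac{1-e^{-\sqrt{n^{2}+k^{2}}y}}{n^{2}+k^{2}}f_{n}. $$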

Using \(u(x,1)=g(x)\), we obtain

$$ g(x)=-\sum_{n=1}^{\infty }\frac{1-e^{-\sqrt{n^{2}+k^{2}}}}{n^{2}+k ^{2}}f_{n}X_{n}= \sum_{n=1}^{\infty }g_{n}X_{n}, $$
(3.1)

where \(g_{n}=(g,X_{n})\).

Define operator \(K: f\rightarrow g\), then

$$ g(x)=Kf(x)=-\sum_{n=1}^{\infty } \frac{1-e^{-\sqrt{n^{2}+k^{2}}}}{n ^{2}+k^{2}}f_{n}X_{n}. $$
(3.2)

The operator K is a linear, self-adjoint, compact operator, and its eigenvalues are

$$ \mu_{n}=-\frac{1-e^{-\sqrt{n^{2}+k^{2}}}}{n^{2}+k^{2}}; $$
(3.3)

we have

$$ g_{n}=-f_{n}\frac{1-e^{-\sqrt{n^{2}+k^{2}}}}{n^{2}+k^{2}}. $$
(3.4)

Hence we obtain

$$ f_{n}=-g_{n}\frac{n^{2}+k^{2}}{1-e^{-\sqrt{n^{2}+k^{2}}}} $$
(3.5)

and

$$ f(x)=\sum_{n=1}^{\infty }f_{n}X_{n}=- \sum_{n=1}^{\infty }\frac{n^{2}+k ^{2}}{1-e^{-\sqrt{n^{2}+k^{2}}}}g_{n}X_{n}= \sum_{n=1}^{\infty }\mu _{n}^{-1}g_{n}X_{n}. $$
(3.6)

As \(n\rightarrow \infty \), \(\mu_{n}^{-1}\) grows like \(n^{2}\). Hence the Fourier coefficients of the exact data \(g(x)\) must decay at least as fast as \(n^{-2}\), whereas the measured data \(g^{\delta }(x)\), which merely belongs to \(L^{2}(0, \pi)\), need not have such decay: the noise in the nth mode is amplified by a factor of order \(n^{2}\). So problem (1.1) is ill-posed. Assume that the unknown source \(f(x)\) satisfies the following a priori bound:

$$ \bigl\Vert f(\cdot)\bigr\Vert _{p}\leq E,\quad p>0, $$
(3.7)

where \(E>0\) is a constant and \(\Vert f(\cdot)\Vert _{p}\) is defined as follows:

$$ \bigl\Vert f(\cdot)\bigr\Vert _{p}=\Biggl(\sum _{n=1}^{\infty }\bigl(n^{2}+k^{2} \bigr)^{p}\bigl\vert (f,X_{n})\bigr\vert ^{2} \Biggr)^{ \frac{1}{2}}. $$
(3.8)
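Returning to the amplification factor \(\mu_{n}^{-1}\) behind this ill-posedness, its quadratic growth can be checked in a few lines (a minimal sketch; the values of k and n are illustrative):

```python
import numpy as np

k = 2.0                                                    # illustrative wave number
n = np.array([1, 10, 100, 1000])
mu = -(1 - np.exp(-np.sqrt(n**2 + k**2))) / (n**2 + k**2)  # eigenvalues (3.3)
print(1 / mu)  # grows like -(n^2 + k^2): noise in the n-th data
               # mode is amplified by a factor of order n^2 in (3.6)
```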

Theorem 3.1

If \(\Vert f(\cdot)\Vert _{p}\leq E\), \(p>0\), then

$$ \bigl\Vert f(\cdot)\bigr\Vert \leq C_{1}E^{\frac{2}{p+2}}\Vert g\Vert ^{\frac{p}{p+2}}, $$
(3.9)

where \(C_{1}:=2^{\frac{p}{p+2}}\).

Proof

From (3.5), (3.8), Lemma 2.3 and the Hölder inequality, we get

$$\begin{aligned} \bigl\Vert f(\cdot)\bigr\Vert ^{2} =& \Biggl\Vert \sum_{n=1}^{\infty }f_{n}X_{n}\Biggr\Vert ^{2} \\ =& \sum_{n=1}^{\infty }g_{n}^{2}\biggl(\frac{n^{2}+k^{2}}{1-e^{-\sqrt{n^{2}+k^{2}}}}\biggr)^{2} \\ =& \sum_{n=1}^{\infty }g_{n}^{\frac{4}{p+2}}\biggl(\frac{n^{2}+k^{2}}{1-e^{-\sqrt{n^{2}+k^{2}}}}\biggr)^{2}g_{n}^{\frac{2p}{p+2}} \\ \leq &\Biggl(\sum_{n=1}^{\infty }g_{n}^{2}\biggl(\frac{n^{2}+k^{2}}{1-e^{-\sqrt{n^{2}+k^{2}}}}\biggr)^{p+2}\Biggr)^{\frac{2}{p+2}}\Biggl(\sum_{n=1}^{\infty }g_{n}^{2}\Biggr)^{\frac{p}{p+2}} \\ =& \Biggl(\sum_{n=1}^{\infty }f_{n}^{2}\bigl(n^{2}+k^{2}\bigr)^{p}\biggl(\frac{1}{1-e^{-\sqrt{n^{2}+k^{2}}}}\biggr)^{p}\Biggr)^{\frac{2}{p+2}}\Biggl(\sum_{n=1}^{\infty }g_{n}^{2}\Biggr)^{\frac{p}{p+2}} \\ \leq & C^{2}_{1}E^{\frac{4}{p+2}}\Vert g\Vert ^{\frac{2p}{p+2}}, \end{aligned}$$

where \(C_{1}:=2^{\frac{p}{p+2}}\); the Hölder inequality with exponents \(\frac{p+2}{2}\) and \(\frac{p+2}{p}\) is used in the fourth line, and Lemma 2.3 (with \(t=n^{2}+k^{2}\geq 1\)) in the last.

We complete the proof of Theorem 3.1. □

Now, we use the Landweber iterative method to obtain the regularization solution for (1.1) and rewrite the equation \(Kf=g\) in the form \(f=(I-aK^{*}K)f+aK^{*}g\) for some \(a>0\). Iterate this equation, i.e.,

$$ f^{0}(x):=0, \qquad f^{m}(x)=\bigl(I-aK^{*}K \bigr)f^{m-1}(x)+aK^{*}g(x),\quad m=1,2,3,\ldots, $$
(3.10)

where m is the number of iteration steps and serves as the regularization parameter, and the relaxation factor a satisfies \(0< a<\frac{1}{\Vert K\Vert ^{2}}\). Since K is a self-adjoint operator, we obtain (writing the summation index as j to avoid confusion with the wave number k)

$$ f^{m,\delta }(x)=a\sum_{j=0}^{m-1} \bigl(I-aK^{2}\bigr)^{j}Kg^{\delta }(x). $$
(3.11)

Using (3.3), we get

$$ f^{m,\delta }(x)=R_{m}g^{\delta }(x)=\sum _{n=1}^{\infty }\frac{1-(1-a \mu_{n}^{2})^{m}}{\mu_{n}}g_{n}^{\delta }X_{n}(x), $$
(3.12)

where \(g_{n}^{\delta }=(g^{\delta },X_{n}(x))\).
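In code, (3.12) amounts to applying a spectral filter to the Fourier coefficients of the noisy data. The following minimal sketch (the function name and the truncation to N modes are our own illustrative choices, separate from the discretization of Section 5) evaluates these coefficients:

```python
import numpy as np

def landweber_coefficients(g_delta_coeffs, k, a, m):
    """Fourier coefficients of f^{m,delta} via the filter in (3.12).

    g_delta_coeffs -- array of (g^delta, X_n), n = 1, ..., N
    k              -- wave number
    a              -- relaxation factor, 0 < a < 1/||K||^2
    m              -- iteration number (the regularization parameter)
    """
    n = np.arange(1, len(g_delta_coeffs) + 1)
    mu = -(1 - np.exp(-np.sqrt(n**2 + k**2))) / (n**2 + k**2)  # eigenvalues (3.3)
    return (1 - (1 - a * mu**2) ** m) / mu * g_delta_coeffs
```

Since \(\Vert K\Vert =\vert \mu_{1}\vert <1\), any \(a\in (0,1]\) in fact satisfies the constraint \(0< a<\frac{1}{\Vert K\Vert ^{2}}\).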

4 Error estimate under two parameters choice rules

In this section, we will give two convergence estimates under an a priori regularization parameter choice rule and an a posteriori regularization parameter choice rule, respectively.

4.1 An a priori regularization parameter choice rule

Theorem 4.1

Let \(f(x)\) given by (3.6) be the exact solution of problem (1.1), and let \(f^{m,\delta }(x)\) given by (3.12) be the Landweber iterative regularization approximation solution. Choose the regularization parameter \(m=[z]\), where

$$ z=\biggl(\frac{E}{\delta }\biggr)^{\frac{4}{p+2}}, $$
(4.1)

then we obtain the following error estimate:

$$ \bigl\Vert f^{m,\delta }(\cdot)-f(\cdot)\bigr\Vert \leq C_{2}E^{\frac{2}{p+2}} \delta^{\frac{p}{p+2}}, $$
(4.2)

where \([z]\) denotes the largest integer less than or equal to z, and \(C_{2}=\sqrt{a}+(\frac{4p}{a})^{\frac{p}{4}}\) is a constant depending on a and p.

Proof

By the triangle inequality, we have

$$\begin{aligned}& \bigl\Vert f^{m,\delta }(\cdot)-f(\cdot)\bigr\Vert \\& \quad =\Biggl\Vert \sum_{n=1}^{\infty }\frac{1-(1-a\mu_{n}^{2})^{m}}{\mu_{n}}g_{n}^{\delta }X_{n}(x)- \sum_{n=1}^{\infty}\mu_{n}^{-1}g_{n}X_{n}(x) \Biggr\Vert \\& \quad \leq \Biggl\Vert \sum_{n=1}^{\infty } \frac{1-(1-a\mu_{n}^{2})^{m}}{\mu_{n}}g_{n}^{\delta }X_{n}(x)-\sum _{n=1}^{\infty }\frac{1-(1-a\mu_{n}^{2})^{m}}{\mu_{n}}g_{n}X_{n}(x) \Biggr\Vert \\& \qquad {}+\Biggl\Vert \sum_{n=1}^{\infty } \frac{1-(1-a\mu_{n}^{2})^{m}}{\mu_{n}}g_{n}X_{n}(x)-\sum _{n=1}^{\infty }\mu_{n}^{-1}g_{n}X_{n}(x) \Biggr\Vert \\& \quad =\bigl\Vert f^{m,\delta }(\cdot)-f^{m}(\cdot)\bigr\Vert + \bigl\Vert f^{m}(\cdot)-f(\cdot)\bigr\Vert . \end{aligned}$$

From condition (1.2) and (3.12), we have

$$\begin{aligned}& \bigl\Vert f^{m,\delta }(\cdot)-f^{m}(\cdot)\bigr\Vert ^{2} \\& \quad =\Biggl\Vert \sum_{n=1}^{\infty } \frac{1-(1-a\mu_{n}^{2})^{m}}{\mu_{n}}g_{n}^{\delta }X_{n}(x)-\sum _{n=1}^{\infty}\frac{1-(1-a\mu_{n}^{2})^{m}}{\mu_{n}}g_{n}X_{n}(x) \Biggr\Vert ^{2} \\& \quad =\Biggl\Vert \sum_{n=1}^{\infty } \frac{1-(1-a\mu_{n}^{2})^{m}}{\mu_{n}}\bigl(g_{n}^{\delta }-g_{n} \bigr)X_{n}(x)\Biggr\Vert ^{2} \\& \quad \leq \sup_{n\geq 1}H(n)^{2}\delta^{2}, \end{aligned}$$

where \(H(n):=\frac{1-(1-a\mu_{n}^{2})^{m}}{\mu_{n}}\).

By Lemma 2.2 (with \(\alpha =\frac{1}{2}\) and \(h=a\mu_{n}^{2}\)), we get

$$ \frac{1-(1-a\mu_{n}^{2})^{m}}{\mu_{n}}\leq \sqrt{am}, $$
(4.3)

i.e.,

$$ H(n)\leq \sqrt{am}. $$
(4.4)

So

$$ \bigl\Vert f^{m,\delta }(\cdot)-f^{m}(\cdot)\bigr\Vert \leq \sqrt{am}\delta. $$
(4.5)

On the other hand, using (3.7), we have

$$\begin{aligned} \bigl\Vert f^{m}(\cdot)-f(\cdot)\bigr\Vert ^{2} =& \Biggl\Vert \sum_{n=1}^{\infty } \frac{1-(1-a\mu_{n}^{2})^{m}}{\mu_{n}}g_{n}X_{n}(x)-\sum _{n=1}^{\infty }\mu_{n}^{-1}g_{n}X_{n}(x) \Biggr\Vert ^{2} \\ =&\Biggl\Vert \sum_{n=1}^{\infty } \frac{(1-a\mu_{n}^{2})^{m}}{\mu_{n}}g_{n}X_{n}(x)\Biggr\Vert ^{2} \\ =&\Biggl\Vert \sum_{n=1}^{\infty }\bigl(1-a \mu_{n}^{2}\bigr)^{m}\bigl(n^{2}+k^{2} \bigr)^{-p/2}\bigl(n^{2}+k^{2}\bigr)^{p/2}f_{n}X_{n}(x) \Biggr\Vert ^{2} \\ \leq &\sup_{n\geq 1}Q(n)^{2}E^{2}, \end{aligned}$$

where \(Q(n):=(1-a(-\frac{1-e^{-\sqrt{n^{2}+k^{2}}}}{n^{2}+k^{2}})^{2})^{m}(n ^{2}+k^{2})^{-\frac{p}{2}}\).

Using Lemma 2.4, we have

$$ Q(n)\leq \biggl(1-\frac{a}{4(n^{2}+k^{2})^{2}}\biggr)^{m}\bigl(n^{2}+k^{2} \bigr)^{- \frac{p}{2}}. $$
(4.6)

Let \(t:=n^{2}+k^{2}\),

$$ F(t):=\biggl(1-\frac{a}{(2t)^{2}}\biggr)^{m}t^{-\frac{p}{2}}. $$
(4.7)

Let \(t_{0}\) satisfy \(F'(t_{0})=0\); we easily get

$$ t_{0}=\biggl(\frac{a(4m+p)}{4p}\biggr)^{\frac{1}{2}}. $$
(4.8)

Thus

$$\begin{aligned} F( t_{0}) =&\biggl(1-\frac{p}{4m+p}\biggr)^{m}\biggl( \frac{a(4m+p)}{4p}\biggr)^{-\frac{p}{4}} \\ \leq & \biggl(\frac{4p}{a(m+1)}\biggr)^{\frac{p}{4}}, \end{aligned}$$
(4.9)

i.e.,

$$ F(t)\leq \biggl(\frac{4p}{a}\biggr)^{\frac{p}{4}}(m+1)^{-\frac{p}{4}}. $$
(4.10)

Thus we obtain

$$ Q(n)\leq \biggl(\frac{4p}{a}\biggr)^{\frac{p}{4}}(m+1)^{-\frac{p}{4}}. $$
(4.11)

Hence

$$ \bigl\Vert f^{m}(\cdot)-f(\cdot)\bigr\Vert \leq \biggl( \frac{4p}{a}\biggr)^{\frac{p}{4}}(m+1)^{- \frac{p}{4}}E. $$
(4.12)

Combining (4.5) and (4.12) and choosing \(m=[z]\), we get

$$ \bigl\Vert f^{m,\delta }(\cdot)-f(\cdot)\bigr\Vert \leq C_{2}E^{\frac{2}{p+2}} \delta^{\frac{p}{p+2}}, $$
(4.13)

where \(C_{2}:=\sqrt{a}+(\frac{4p}{a})^{\frac{p}{4}}\).

We complete the proof of Theorem 4.1. □
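In code, the a priori choice (4.1) is a single line (a minimal sketch; the a priori bound E, the noise level δ, and p are assumed known):

```python
import math

def a_priori_m(E, delta, p):
    """Regularization parameter m = [z] with z = (E/delta)^{4/(p+2)}; see (4.1)."""
    return math.floor((E / delta) ** (4.0 / (p + 2)))
```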

4.2 An a posteriori regularization parameter choice rule

We consider the a posteriori regularization parameter choice based on the Morozov discrepancy principle and construct the regularized solutions \(f^{m,\delta }(x)\) by the Landweber iterative method. Let \(\tau >1\) be a given fixed constant. Stop the algorithm at the first occurrence of \(m=m(\delta)\in \mathbb{N}_{0}\) with

$$ \bigl\Vert Kf^{m,\delta }(\cdot)-g^{\delta }(\cdot)\bigr\Vert \leq \tau \delta, $$
(4.14)

where we assume \(\Vert g^{\delta }\Vert \geq \tau \delta \).
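A minimal sketch of this stopping rule for a discretized problem (K is a symmetric matrix approximating the operator, as in Section 5; the function name and the iteration cap are our own illustrative choices):

```python
import numpy as np

def stop_by_discrepancy(K, g_delta, a, delta, tau, m_max=10**6):
    """Iterate (3.10) until ||K f^{m,delta} - g^delta|| <= tau * delta, cf. (4.14)."""
    f = np.zeros_like(g_delta)
    for m in range(1, m_max + 1):
        f = f + a * K.T @ (g_delta - K @ f)      # one Landweber step (3.10)
        if np.linalg.norm(K @ f - g_delta) <= tau * delta:
            return m, f                          # first m satisfying (4.14)
    return m_max, f
```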

Lemma 4.2

Let \(\gamma (m)=\Vert Kf^{m,\delta }(\cdot)-g^{\delta }(\cdot)\Vert \), then we have the following conclusions:

(a) \(\gamma (m)\) is a continuous function;

(b) \(\lim_{m\rightarrow 0}\gamma (m)=\Vert g^{\delta }\Vert \);

(c) \(\lim_{m\rightarrow +\infty }\gamma (m)=0\);

(d) \(\gamma (m)\) is a strictly decreasing function for any \(m\in (0,+\infty)\).

Lemma 4.3

For fixed \(\tau >1\) and the Landweber iteration with stopping rule (4.14), the regularization parameter \(m=m(\delta,g^{\delta })\in \mathbb{N}_{0}\) satisfies

$$ m\leq \biggl(\frac{2(p+2)}{a}\biggr) \biggl(\frac{E}{(\tau -1)\delta } \biggr)^{\frac{4}{p+2}}. $$
(4.15)

Proof

From (3.12), we have the representation

$$ R_{m}g=\sum_{n=1}^{\infty } \frac{1-(1-a\mu_{n}^{2})^{m}}{\mu_{n}}g _{n}X_{n}(x) $$
(4.16)

for every \(g\in L^{2}(0,\pi)\), and thus

$$\begin{aligned} \Vert KR_{m}g-g\Vert ^{2} =&\Biggl\Vert \sum _{n=1}^{\infty }\bigl(1-\bigl(1-a \mu_{n}^{2}\bigr)^{m}\bigr)g_{n}X_{n}(x)- \sum_{n=1}^{\infty }g_{n}X_{n}(x) \Biggr\Vert ^{2} \\ =&\Biggl\Vert \sum_{n=1}^{\infty }-\bigl(1-a \mu_{n}^{2}\bigr)^{m}g_{n}X_{n}(x) \Biggr\Vert ^{2} \\ =&\sum_{n=1}^{\infty }\bigl(1-a \mu_{n}^{2}\bigr)^{2m}g_{n}^{2}. \end{aligned}$$

From \(\vert 1-a\mu_{n}^{2}\vert <1\), we conclude that \(\Vert KR_{m-1}-I\Vert \leq 1\).

We know that m is the minimum value that satisfies \(\Vert KR_{m}g^{\delta }-g^{\delta }\Vert =\Vert Kf^{m,\delta }-g^{\delta }\Vert \leq \tau \delta \). Hence

$$\begin{aligned} \Vert KR_{m-1}g-g\Vert \geq &\bigl\Vert KR_{m-1}g^{\delta }-g^{\delta } \bigr\Vert -\bigl\Vert (KR_{m-1}-I) \bigl(g-g^{\delta }\bigr) \bigr\Vert \\ \geq &\tau \delta -\Vert KR_{m-1}-I\Vert \delta \\ \geq &(\tau -1)\delta. \end{aligned}$$

On the other hand, using (3.7), we obtain

$$\begin{aligned} \Vert KR_{m-1}g-g\Vert =&\Biggl\Vert \sum_{n=1}^{\infty }\bigl(1-\bigl(1-a\mu_{n}^{2}\bigr)^{m-1}\bigr)g_{n}X_{n}(x)-\sum_{n=1}^{\infty }g_{n}X_{n}(x)\Biggr\Vert \\ =&\Biggl\Vert \sum_{n=1}^{\infty }-\bigl(1-a\mu_{n}^{2}\bigr)^{m-1}g_{n}X_{n}(x)\Biggr\Vert \\ =&\Biggl\Vert \sum_{n=1}^{\infty }\bigl(1-a\mu_{n}^{2}\bigr)^{m-1}\mu_{n}\bigl(n^{2}+k^{2}\bigr)^{-\frac{p}{2}}f_{n}\bigl(n^{2}+k^{2}\bigr)^{\frac{p}{2}}X_{n}(x)\Biggr\Vert \\ \leq &\sup_{n\geq 1}\bigl\vert \bigl(1-a\mu_{n}^{2}\bigr)^{m-1}\mu_{n}\bigl(n^{2}+k^{2}\bigr)^{-\frac{p}{2}}\bigr\vert E. \end{aligned}$$

Let

$$ S(n):=\bigl(1-a\mu_{n}^{2}\bigr)^{m-1} \mu_{n}\bigl(n^{2}+k^{2}\bigr)^{-\frac{p}{2}}, $$
(4.17)

so

$$ (\tau -1)\delta \leq \sup_{n\geq 1}S(n)E. $$
(4.18)

Using Lemma 2.4 and writing \(t:=n^{2}+k^{2}\), we have

$$ S(n)\leq \biggl(1-a\biggl(\frac{1}{2t}\biggr)^{2} \biggr)^{m-1}t^{-\frac{p}{2}-1}. $$
(4.19)

Let

$$ G(t)=\biggl(1-a\biggl(\frac{1}{2t}\biggr)^{2} \biggr)^{m-1}t^{-\frac{p}{2}-1}, $$
(4.20)

and let \(t_{\ast }\) satisfy \(G'(t_{\ast })=0\); we easily get

$$ t_{\ast }=\biggl(\frac{a(4m+p-2)}{4(p+2)}\biggr)^{\frac{1}{2}}, $$
(4.21)

so

$$\begin{aligned} G(t_{\ast }) =&\biggl(1-\frac{p+2}{4m+p-2}\biggr)^{m-1}\biggl(\frac{a(4m+p-2)}{4(p+2)}\biggr)^{-\frac{p+2}{4}} \\ \leq &\biggl(\frac{2(p+2)}{ma}\biggr)^{\frac{p+2}{4}}. \end{aligned}$$

Combining (4.18) with the above bound on \(G(t_{\ast })\), we obtain (4.15). □

Theorem 4.4

Let \(f(x)\) given by (3.6) be the exact solution of problem (1.1), and let \(f^{m,\delta }(x)\) given by (3.12) be the Landweber iterative regularization approximation solution, with the regularization parameter m determined by the stopping rule (4.14). Then we have the following error estimate:

$$ \bigl\Vert f^{m,\delta }(\cdot)-f(\cdot)\bigr\Vert \leq \bigl(C_{1}(\tau +1)^{\frac{p}{p+2}}+C_{3}\bigr)E^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}}, $$
(4.22)

where \(C_{3}=(2(p+2))^{\frac{1}{2}}(\frac{1}{\tau -1})^{\frac{2}{p+2}}\) is a constant.

Proof

Using triangle inequality, we obtain

$$ \bigl\Vert f^{m,\delta }(\cdot)-f(\cdot)\bigr\Vert \leq \bigl\Vert f^{m,\delta }(\cdot)-f^{m}(\cdot)\bigr\Vert +\bigl\Vert f^{m}(\cdot)-f(\cdot)\bigr\Vert . $$
(4.23)

By (4.5) and Lemma 4.3, we get

$$\begin{aligned} \bigl\Vert f^{m,\delta }(\cdot)-f^{m}(\cdot)\bigr\Vert \leq& \sqrt{am}\delta \\ \leq& C_{3}E^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}}, \end{aligned}$$
(4.24)

where \(C_{3}=(2(p+2))^{\frac{1}{2}}(\frac{1}{\tau -1})^{\frac{2}{p+2}}\).

For the second term on the right side of (4.23), we have

$$\begin{aligned}& K\bigl(f^{m}(\cdot)-f(\cdot)\bigr) \\& \quad =\sum _{n=1}^{\infty }-\bigl(1-a\mu_{n}^{2} \bigr)^{m}g _{n}X_{n}(x) \\& \quad =\sum_{n=1}^{\infty }-\bigl(1-a \mu_{n}^{2}\bigr)^{m}\bigl(g_{n}-g_{n}^{\delta } \bigr)X _{n}(x)+\sum_{n=1}^{\infty }- \bigl(1-a\mu_{n}^{2}\bigr)^{m}g_{n}^{\delta }X_{n}(x). \end{aligned}$$

Using (1.2) and the stopping rule (4.14), we have

$$ \bigl\Vert K\bigl(f^{m}(\cdot)-f(\cdot)\bigr)\bigr\Vert \leq ( \tau +1)\delta. $$
(4.25)

Due to

$$\begin{aligned} \bigl\Vert f^{m}(\cdot)-f(\cdot)\bigr\Vert _{p} =& \Biggl(\sum_{n=1}^{\infty }\bigl(1-a\mu_{n}^{2}\bigr)^{2m}f_{n}^{2}\bigl(n^{2}+k^{2}\bigr)^{p}\Biggr)^{\frac{1}{2}} \\ \leq &\Biggl(\sum_{n=1}^{\infty }f_{n}^{2}\bigl(n^{2}+k^{2}\bigr)^{p}\Biggr)^{\frac{1}{2}} \\ \leq & E, \end{aligned}$$

and using Theorem 3.1, we have

$$ \bigl\Vert f^{m}(\cdot)-f(\cdot)\bigr\Vert \leq C_{1}(\tau +1)^{\frac{p}{p+2}}E^{ \frac{2}{p+2}}\delta^{\frac{p}{p+2}}. $$
(4.26)

Therefore,

$$ \bigl\Vert f^{m,\delta }(\cdot)-f(\cdot)\bigr\Vert \leq \bigl(C_{1}(\tau +1)^{ \frac{p}{p+2}}+C_{3} \bigr)E^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}}. $$
(4.27)

This completes the proof of Theorem 4.4. □

5 Numerical implementation and numerical examples

In this section, we present numerical results for several examples to show the effectiveness of the proposed method.

By (3.2), we have

$$\begin{aligned} K\bigl(f(x)\bigr) =&-\sum_{n=1}^{\infty } \frac{1-e^{-\sqrt{n^{2}+k^{2}}}}{n ^{2}+k^{2}}(f,X_{n})X_{n} \\ =&-\sum_{n=1}^{\infty } \frac{1-e^{-\sqrt{n^{2}+k^{2}}}}{n^{2}+k ^{2}} \int_{0}^{\pi }f(s)\sqrt{\frac{2}{\pi }}\sin(ns)\,ds \sqrt{\frac{2}{ \pi }}\sin(nx) \\ =&- \int_{0}^{\pi }\frac{2}{\pi }\sum _{n=1}^{\infty }\frac{1-e^{-\sqrt{n ^{2}+k^{2}}}}{n^{2}+k^{2}}f(s)\sin(ns)\sin(nx)\,ds \\ =&g(x). \end{aligned}$$
(5.1)

We approximate the infinite series by its first N terms:

$$ -\sum_{n=1}^{\infty }\frac{1-e^{-\sqrt{n^{2}+k^{2}}}}{n^{2}+k^{2}}f(s)\sin(ns)\sin(nx) \approx -\sum_{n=1}^{N} \frac{1-e^{-\sqrt{n^{2}+k^{2}}}}{n^{2}+k^{2}}f(s)\sin(ns)\sin(nx). $$
(5.2)

Let \(x_{i}=\frac{(i-1)\pi }{M}\), \(i=1,2,\ldots, M+1\). Discretizing the integral by the composite Simpson rule, (5.1) becomes

$$ Kf(x_{j})=-\frac{2}{\pi }\sum_{i=1}^{M+1} \sum_{n=1}^{N}\frac{1-e^{-\sqrt{n ^{2}+k^{2}}}}{n^{2}+k^{2}}f(x_{i})\sin(nx_{i})\sin(nx_{j}) \omega_{i} \pi. $$
(5.3)

So

$$ -2\sum_{i=1}^{M+1}\sum _{n=1}^{N}\frac{1-e^{-\sqrt{n^{2}+k^{2}}}}{n ^{2}+k^{2}}f(x_{i})\sin(nx_{i})\sin(nx_{j}) \omega_{i}=g(x_{j}), $$
(5.4)

where

$$ \omega_{i}= \textstyle\begin{cases} \frac{1}{3M}, & i=1,M+1, \\ \frac{4}{3M}, &i=2,4,\ldots, M, \\ \frac{2}{3M}, &i=3,5,\ldots, M-1. \end{cases} $$
(5.5)

The matrix form of (5.4) is as follows:

$$ Bf=g, $$
(5.6)

where

$$\begin{aligned}& B_{ij}=-2\sum_{n=1}^{N} \frac{1-e^{-\sqrt{n^{2}+k^{2}}}}{n^{2}+k^{2}}\sin(nx_{i})\sin(nx_{j}) \omega_{i}, \\& f=\bigl[f(x_{1}),f(x_{2}),\ldots,f(x_{i}),\ldots,f(x_{M+1})\bigr]^{T}, \\& g=\bigl[g(x_{1}),g(x_{2}),\ldots,g(x_{i}),\ldots,g(x_{M+1})\bigr]^{T}. \end{aligned}$$

Because K is self-adjoint, \(B^{\ast }=B\). The corresponding Landweber iteration is

$$ f^{m,\delta }(x)=a\sum_{j=1}^{m} \bigl(I-aB^{2}\bigr)^{j-1}Bg^{\delta }(x). $$
(5.7)

In the computational procedure, we take \(p=1\). In the discrete format, we take \(M=100\), \(N=6\) to compute the forward problem and choose \(M=200\), \(N=6\) to solve the inverse problem.
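As an illustration of this procedure, the following minimal sketch (the function names are our own, and M is assumed even) assembles the matrix B of (5.6) with the Simpson weights (5.5) and performs the iteration (5.7) in its equivalent recursive form \(f^{j}=f^{j-1}+aB(g^{\delta }-Bf^{j-1})\), \(f^{0}=0\):

```python
import numpy as np

def build_B(M, N, k):
    """Assemble B of (5.6): B[j, i] = -2 sum_n c_n sin(n x_i) sin(n x_j) w_i."""
    x = np.arange(M + 1) * np.pi / M          # x_i = (i-1)pi/M, i = 1, ..., M+1
    w = np.full(M + 1, 2.0 / (3 * M))         # Simpson weights (5.5), 0-based
    w[1::2] = 4.0 / (3 * M)                   # i = 2, 4, ..., M (1-based)
    w[0] = w[M] = 1.0 / (3 * M)               # endpoints
    n = np.arange(1, N + 1)
    c = (1 - np.exp(-np.sqrt(n**2 + k**2))) / (n**2 + k**2)
    S = np.sin(np.outer(x, n))                # S[j, n-1] = sin(n x_j)
    return -2.0 * ((S * c) @ S.T) * w[None, :]

def landweber_solve(B, g_delta, a, m):
    """Perform m Landweber steps for B f = g^delta, equivalent to (5.7)."""
    f = np.zeros_like(g_delta)
    for _ in range(m):
        f = f + a * B.T @ (g_delta - B @ f)   # B.T = B up to quadrature weights
    return f
```

The relaxation factor should satisfy \(0< a<\frac{1}{\Vert B\Vert ^{2}}\); the spectral norm can be estimated by `np.linalg.norm(B, 2)`.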

Example 1

A direct calculation shows that the function \(u(x,y)=(1-e^{-\sqrt{2}ky})\sin(kx)\) and the source \(f(x)=-2k^{2}\sin(kx)\) satisfy problem (1.1) with exact data \(g(x)=(1-e^{-\sqrt{2}k})\sin(kx)\), and the a priori bound is

$$ E=\bigl\Vert f(\cdot)\bigr\Vert _{p}=\Biggl(\sum _{n=1}^{N}\bigl(n^{2}+k^{2} \bigr)^{p}\bigl\vert (f,X_{n})\bigr\vert ^{2} \Biggr)^{ \frac{1}{2}}. $$
(5.8)

Noisy data are generated by adding a random perturbation, that is,

$$g^{\delta }=g+\varepsilon \bigl(\operatorname{rand}\bigl(\operatorname{size}(g)\bigr)\bigr), $$

where ε is the relative error level. The total noise level δ is given by the following:

$$\delta =\bigl\Vert g^{\delta }-g\bigr\Vert =\sqrt{ \frac{1}{M+1}\sum_{i=1}^{M+1} \bigl(g^{ \delta }(x_{i})-g(x_{i})\bigr)^{2}}. $$
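In code, the perturbation and the discrete noise level can be generated as follows (a minimal sketch assuming the vector `g` holds the exact data at the grid points; the seed is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)                  # illustrative fixed seed
eps = 0.001                                     # relative error level epsilon
g_delta = g + eps * rng.random(g.shape)         # g^delta = g + eps * rand(size(g))
delta = np.sqrt(np.mean((g_delta - g) ** 2))    # discrete noise level delta
```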

Figure 1 shows the numerical results for \(k=2\) with \(\varepsilon =0.001\) and \(\varepsilon =0.01\), respectively. In Figure 2, with \(\varepsilon =0.001\), we take \(k=4,5\) to solve this problem.

Figure 1. The comparison of numerical effects between the exact solution and its regularization solution for Example 1, \(k=2\): (a) \(\varepsilon =0.001\), (b) \(\varepsilon =0.01\).

Figure 2. The comparison of numerical effects between the exact solution and its regularization solution for Example 1, \(\varepsilon =0.001\): (a) \(k=4\), (b) \(k=5\).

It can be seen from Figures 1-2 that the Landweber iterative regularization method is effective for the inverse problem of identifying the unknown source of the modified Helmholtz equation. On the one hand, as the random perturbation increases, the results become worse. On the other hand, as k increases, the results become slightly worse, which is consistent with our error estimates.

Example 2

Consider the following discontinuous function:

$$ f(x)= \textstyle\begin{cases} 0, & 0\leq x\leq \frac{\pi }{4}, \\ \frac{4}{\pi }x-1, &\frac{\pi }{4}< x\leq \frac{\pi }{2}, \\ -\frac{4}{\pi }x+3, &\frac{\pi }{2}< x\leq \frac{3}{4}\pi, \\ 0, &\frac{3}{4}\pi < x\leq \pi. \end{cases} $$
(5.9)

Figure 3 shows the numerical results for \(k=1\) with \(\varepsilon =0.001\) and \(\varepsilon =0.005\), respectively. In Figure 4, with \(\varepsilon =0.01\), we take \(k=2,4\) to solve this problem. Figures 3-4 compare the numerical effects of the exact solution and the regularized solution under the a priori and a posteriori regularization parameter choice rules for Example 2.

Figure 3. The comparison of numerical effects between the exact solution and its regularization solution for Example 2, \(k=1\): (a) \(\varepsilon =0.001\), (b) \(\varepsilon =0.005\).

Figure 4. The comparison of numerical effects between the exact solution and its regularization solution for Example 2, \(\varepsilon =0.01\): (a) \(k=2\), (b) \(k=4\).

Example 3

Consider the following discontinuous function:

$$ f(x)= \textstyle\begin{cases} 0, & 0\leq x\leq \frac{\pi }{3}, \\ 1, &\frac{\pi }{3}< x\leq \frac{2}{3}\pi, \\ 0, &\frac{2}{3}\pi < x\leq \pi. \end{cases} $$
(5.10)

Figure 5 shows the numerical results for \(k=1\) with \(\varepsilon =0.01\) and \(\varepsilon =0.05\), respectively. In Figure 6, with \(\varepsilon =0.001\), we take \(k=2,4\) to solve this problem. Figures 5-6 compare the numerical effects of the exact solution and the regularized solution under the a priori and a posteriori regularization parameter choice rules for Example 3.

Figure 5. The comparison of numerical effects between the exact solution and its regularization solution for Example 3, \(k=1\): (a) \(\varepsilon =0.01\), (b) \(\varepsilon =0.05\).

Figure 6. The comparison of numerical effects between the exact solution and its regularization solution for Example 3, \(\varepsilon =0.001\): (a) \(k=2\), (b) \(k=4\).

From Figures 3-6, we can see that the smaller ε is, the better the computed approximation. Moreover, the a posteriori parameter choice rule works better than the a priori parameter choice rule for these examples, which is consistent with our theoretical analysis.

6 Conclusion

In this paper, we investigated the inverse source problem for the modified Helmholtz equation. A conditional stability result was given, and the Landweber iterative method was proposed to obtain a regularized solution. Error estimates were obtained under an a priori and an a posteriori regularization parameter choice rule, respectively. Numerical examples verify that the Landweber iterative method is efficient and accurate.