Introduction

Integral equations arise in a wide variety of applications such as potential theory, optimal control, electromagnetic scattering, antenna synthesis and physics, see [10, 27, 29, 40, 42]. In many applications in science and engineering, such as medical imaging (CT scanning, electrocardiography, etc.), geophysical prospecting (searching for oil, land-mines, etc.), image deblurring (astronomy, crime scene investigations, etc.), spectroscopy, signal processing and image processing [24, 30], the relation between the observed quantity and the quantity to be measured can be formulated as a Fredholm integral equation of the first kind.

Therefore, it should be no surprise that such problems and the methods for solving them have generated a very substantial literature on applications. We consider the Fredholm integral equation of the first kind in the generic form

$$\begin{aligned} \int _{a}^{b}K(s, t)x(t)\mathrm{d}t=g(s), \quad c\le s \le d, \end{aligned}$$
(1)

where the functions g (the right-hand side function) and K (the kernel) are known. Here, the function x is the unknown function which must be determined.

In the early twentieth century, Hadamard [19] described the conditions for well-posed problems; that is to say, a problem is well-posed when it satisfies the following three conditions:

  1. The problem has a solution.

  2. The solution is unique.

  3. The solution depends continuously on the data (stability).

If at least one of the above conditions is violated in a problem, it is referred to as an ill-posed problem. Violations of conditions 1 and 2 can often be remedied with a slight re-formulation. The violation of stability is much harder to handle than the other two, because it implies that a small perturbation in the data leads to a large perturbation in the approximate solution [24].

Fredholm integral equations of the first kind are intrinsically ill-posed [11, 24, 28]; that is to say, the solution is extremely sensitive to small perturbations. After discretizing Eq. (1), most classical numerical methods, such as the LU, QR and Cholesky factorizations, fail to compute an appropriate solution for (1), see [20, 23]. These difficulties are inseparably connected with the compactness of the operator associated with the kernel K [28]. Several numerical methods have been employed to approximate the solution of (1), see [1, 4, 8, 12, 17, 25, 31, 35, 39].

In many practical applications modeled as (1), the kernel K is given exactly by the underlying mathematical model, while the function g typically consists of measured quantities, i.e., g is only known with a certain accuracy and only at a finite set of points [20]. Due to the ill-posedness of the problem, numerical solutions are very sensitive to perturbations and noise. Usually, these perturbations come from observation, measuring and rounding errors. Therefore, we are interested in considering Eq. (1) with a noisy function g. Solving an ill-posed problem with noisy data should be done by regularization methods [16]. In this paper, we present a self-regularized iterative method and explain why our method can act as a regularization method.

After discretizing (1), we obtain the following minimization problem

$$\begin{aligned} \min _{x \in {\mathbb {R}}^n} \Vert Ax-b\Vert _{2}, \end{aligned}$$
(2)

where \(A \in {\mathbb {R}}^{n\times n}.\) Note that we consider the minimization problem (2), which is more general than solving the linear system \(Ax=b.\) It should be mentioned that a linear system of equations may be consistent for noise-free data, whereas using noisy data, which we are interested in, may turn it into an inconsistent system. Therefore, considering the minimization problem is more general than the linear system. Eq. (2) is, in general, a discrete ill-posed problem if the singular values of the matrix A decay gradually to zero and the matrix A is ill-conditioned (i.e., the condition number of A is large) [23]. Our linear system of equations here is a discrete ill-posed problem. Therefore, the condition number of the matrix A is very large, and it increases with the size of the matrix A. As a result, rounding errors prevent the computation of a meaningful solution [20].

When solving a set of linear ill-posed equations by an iterative method, typically the iterates first improve, while at later stages the influence of the noise becomes more and more noticeable. This phenomenon is called semi-convergence by Natterer [32, p. 89].

In this paper, we consider the following Landweber-type method

$$\begin{aligned} x^{k+1}=x^k+\lambda _k A^\mathrm{T}M(b-Ax^k),\,\,k=0,1,\dots , \end{aligned}$$
(3)

where \(\lambda _k\) is a relaxation parameter and M is a given symmetric positive definite matrix. Several well-known methods can be written in the form of (3) for appropriate choices of the matrix M. With M equal to the identity, we get the classical Landweber method [29]. Cimmino’s method [9] is obtained with \(M = \frac{1}{m}{\mathrm {diag}}(1/\Vert a_i \Vert ^2)\) where \(a_i\) denotes the ith row of A. The CAV method [7] uses \(M={\mathrm {diag}}(1/\sum _{j=1}^n N_j a_{ij}^2)\) where \(N_j\) is the number of nonzeroes in the jth column of A.
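
To make the iteration concrete, the following Python/NumPy sketch (our own illustration, not the implementation used in this paper) builds the CAV weight matrix M described above and runs a fixed number of steps of (3) on a small toy problem; all names and the toy data are ours.

```python
import numpy as np

def cav_weights(A):
    """CAV weight matrix: M = diag(1 / sum_j N_j * a_ij^2),
    where N_j is the number of nonzeros in column j of A."""
    N = np.count_nonzero(A, axis=0)        # nonzeros per column
    d = (A ** 2) @ N                       # row i: sum_j N_j * a_ij^2
    return np.diag(1.0 / d)

def landweber_type(A, b, M, lam, iters, x0=None):
    """Landweber-type iteration (3): x^{k+1} = x^k + lam * A^T M (b - A x^k)."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(iters):
        x = x + lam * (A.T @ (M @ (b - A @ x)))
    return x

# Toy illustration (not one of the paper's test problems)
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
b = A @ np.ones(20)
M = cav_weights(A)
sigma1 = np.linalg.svd(np.sqrt(M) @ A, compute_uv=False)[0]
x = landweber_type(A, b, M, lam=1.9 / sigma1 ** 2, iters=500)
print(np.linalg.norm(A @ x - b))           # residual shrinks as the iteration proceeds
```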

Convergence results and recent extensions, including block versions of (3), can be found in [6, 26]. The conditions given there are sufficient for convergence. Here, we present a necessary and sufficient condition for the convergence of (3) and explain why the iterative method can act as a regularization method when the relaxation parameter is constant.

We use the following notation. Let R(A) denote the range of a matrix A and \(\Vert x\Vert =\sqrt{x^\mathrm{T} x}\) the Euclidean 2-norm. The Moore–Penrose pseudoinverse of A is denoted by \(A^{\dagger }.\) Further, for a symmetric positive definite (SPD) matrix M, \(\Vert x\Vert _M=\sqrt{\langle Mx,x\rangle }\) and \(M^{1/2}\) denote the weighted Euclidean norm and the square root of M, respectively. The identity matrix of appropriate size is denoted by I.

The paper is organized as follows. In "Popular direct regularization methods" section, we recall direct regularization methods such as Tikhonov regularization and the truncated singular value decomposition. Furthermore, we recall their filter factors, which show how they regularize ill-posed problems. "Iterative regularization methods" section deals with the Landweber-type iterative method and the modulus-based iterative methods. The "Landweber-type methods" section discusses the self-regularization property of the Landweber-type iterative method and presents its filter factors. Furthermore, we give a necessary and sufficient condition for its convergence. We also review different relaxation-parameter strategies, which are studied in [13, 14, 34]. We next consider the projected version of the Landweber-type iterative method.

We describe the modulus-based iterative methods for constrained Tikhonov regularization [1] in "Modulus-based iterative methods for constrained Tikhonov regularization" section. In the last section, i.e., "Numerical results" section, we present the outcome of numerical experiments using examples taken from [23] for the Fredholm integral equation of the first kind.

Popular direct regularization methods

In this section, we briefly discuss the necessity of regularizing discrete ill-posed problems and recall the most popular regularization methods, namely the truncated SVD (TSVD) and the Tikhonov regularization method.

One of the most popular methods for solving (1) or its discrete version, i.e. (2), is the Tikhonov regularization method. The classical way to filter out the high-frequency components associated with the small singular values is to apply regularization to the problem. The regularization method was established by Tikhonov [38, 39] and Phillips [36]. In its original framework, regularization is applied directly to the integral equation (1), see, e.g., [41]. In this presentation, we will restrict our discussion to regularization of (2).

The Tikhonov regularization method replaces (2) by the following minimization problem

$$\begin{aligned} \min _{x\in {\mathbb {R}}^n} \left\| \left( \begin{array}{c} A \\ \mu L \\ \end{array} \right) x-\left( \begin{array}{c} b \\ 0 \\ \end{array} \right) \right\| , \end{aligned}$$
(4)

where \(\mu\) is referred to as the regularization parameter and the matrix L is appropriately chosen. Typically, L is the identity matrix or a discrete approximation of a derivative operator [20]. The normal equations corresponding to the minimization problem (4) are

$$\begin{aligned} (A^\mathrm{T} A+ \mu ^2 L^\mathrm{T} L)x =A^\mathrm{T} b. \end{aligned}$$
(5)

Thus, the solution of (4) can be written as

$$\begin{aligned} x_{\mu } =(A^\mathrm{T}A+ \mu ^2 L^\mathrm{T}L)^{-1}A^\mathrm{T}b. \end{aligned}$$
(6)

Let \(L=I\) and consider the following additive noise model

$$\begin{aligned} b=\bar{b}+\delta b, \end{aligned}$$

where \(\bar{b}\) is the noise-free right-hand side and \(\delta b\) is the noise-component. Using the singular value decomposition (SVD) of the matrix \(A=U \varSigma V^\mathrm{T}\), one easily gets

$$\begin{aligned} x_{\mu }&=(A^\mathrm{T}A+ \mu ^2 I)^{-1}A^\mathrm{T}b \nonumber \\&=(V \varSigma U^\mathrm{T} U\varSigma V^\mathrm{T}+\mu ^2 I)^{-1}V \varSigma U^\mathrm{T}b \nonumber \\&=\sum _{i=1}^{p}\frac{\sigma _i^2}{\sigma _i^2+\mu ^2}\frac{u{_i}^\mathrm{T} (\bar{b}+\delta b)}{\sigma _i}v_i, \end{aligned}$$
(7)

where \(\{\sigma _i\}_{i=1}^p\) are the singular values of A and \({\hbox {rank}}(A)=p.\) Also \(\{u_i\}_{i=1}^p\) and \(\{v_i\}_{i=1}^p\) denote columns of U and V, respectively. The functions

$$\begin{aligned} \phi _i= \frac{\sigma _i^2}{\sigma _i^2+\mu ^2}, \,\,i=1,2,\dots , p, \end{aligned}$$
(8)

are called filter factors, see, e.g., [5] and [22, p. 138]. We will discuss the filter factors (8) later.
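
As an illustration (our own, not part of the original presentation), the following sketch computes \(x_{\mu }\) both from the normal equations (5) with \(L=I\) and from the filtered SVD expansion (7)–(8), and checks that the two agree; the toy matrix is arbitrary.

```python
import numpy as np

def tikhonov_direct(A, b, mu):
    """Solve the normal equations (5) with L = I: (A^T A + mu^2 I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + mu ** 2 * np.eye(n), A.T @ b)

def tikhonov_filtered(A, b, mu):
    """The same solution written as the filtered SVD expansion (7)-(8)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    phi = s ** 2 / (s ** 2 + mu ** 2)      # Tikhonov filter factors (8)
    return Vt.T @ (phi * (U.T @ b) / s)

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 40))
b = rng.standard_normal(40)
mu = 0.1
# The difference between the two representations is at the level of rounding errors.
print(np.linalg.norm(tikhonov_direct(A, b, mu) - tikhonov_filtered(A, b, mu)))
```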

A similar expression to (7) can be obtained by the truncated SVD (TSVD); see [18, 22] for more information on the TSVD. Indeed, the truncated approximate solution of (2) can be written as

$$\begin{aligned} x_{\mathrm{tsvd}}^k=\sum _{i=1}^{k} {\frac{{u}_{i}^\mathrm{T} (\bar{b}+\delta b)}{\sigma _i}}{v}_i, \end{aligned}$$
(9)

for \(k=1,\ldots ,p.\) The filter factors of the TSVD method are

$$\begin{aligned} \phi _i= & {} \left\{ \begin{array}{ll} 1 &{}\quad i\le k\\ 0 &{}\quad i>k, \end{array} \right. \end{aligned}$$

Since the singular values of A, i.e., \(\sigma _i,\) decay gradually to zero (the nature of discrete ill-posed problems), the terms in the summation (9) can become arbitrarily large. Therefore, using more singular values in (9) generally yields a TSVD solution with larger norm. For that reason, the filter factors of the TSVD cut off the "unwanted" small singular values. On the other hand, it is known that \(x_{\mathrm{tsvd}}^{p}=A^{\dag }b\) is the solution of (2) with minimum norm. After computing the singular values in the TSVD method, one may therefore truncate them at some point to get a proper solution for the problem. This apparent contradiction motivates the use of regularization methods for ill-posed problems. However, choosing the truncation index, i.e., deciding which singular values to neglect, is itself a problem, known as choosing the regularization parameter. We would like to obtain a method which recognizes the regularization parameter somehow automatically and cheaply. The Tikhonov filter factors (8) have, to some extent, such an ability: when the singular values decay to zero, the filter factors (8) decay with them. But the most important drawback of Tikhonov regularization (as of the TSVD) is its long computational time, caused by the computational complexity of the SVD.
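
The following small sketch (again our own illustration) computes the TSVD solution (9) for a toy matrix with geometrically decaying singular values and slightly noisy data; the growth of the solution norm with the truncation index k shows why the truncation index acts as a regularization parameter.

```python
import numpy as np

def tsvd_solution(A, b, k):
    """Truncated SVD solution (9): keep only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Toy matrix with geometrically decaying singular values and slightly noisy data
rng = np.random.default_rng(2)
Q1, _ = np.linalg.qr(rng.standard_normal((50, 50)))
Q2, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = Q1 @ np.diag(0.5 ** np.arange(50)) @ Q2.T
b = A @ np.ones(50) + 1e-6 * rng.standard_normal(50)

for k in (5, 15, 30, 50):
    print(k, np.linalg.norm(tsvd_solution(A, b, k)))   # the norm grows with k
```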

Iterative regularization methods

In this section, we recall the Landweber-type iterative method and discuss its self-regularization property. We give a necessary and sufficient condition for its convergence. Furthermore, we describe the modulus-based iterative methods for constrained Tikhonov regularization [1].

Landweber-type methods

In order to better understand the mechanism of semi-convergence, we take a closer look at the errors in the regularized solution using the Landweber-type method (3) with a constant relaxation parameter, i.e., \(\lambda _k=\lambda .\)

Let \(x^{*}=\arg \min \Vert A x-\bar{b}\Vert _M\) be the unique weighted least squares solution of minimal 2-norm.

Theorem 1

Let \(\lambda _k=\lambda\) for \(k\ge 0.\) Then, the iterates of (3) converge to a solution (call it \(\widehat{x}\)) of (2) if and only if \(0< \lambda < 2/\sigma _1^2\), with \(\sigma _1\) the largest singular value of \(M^{\frac{1}{2}}A.\) If in addition \(x^0\in R(A^{\mathrm{T}})\), then \(\widehat{x}\) is the unique solution with minimal Euclidean norm.

Proof

We assume, without loss of generality, that \({x}^0=0.\) Let

$$\begin{aligned} B=A^\mathrm{T}MA \text { and } c=A^\mathrm{T}Mb. \end{aligned}$$

Then using (3), we get

$$\begin{aligned} {x}^k&=(I-\lambda B)x^{k-1}+\lambda c \\&=\lambda \sum _{j=0}^{k-1}(I-\lambda B)^{k-j-1} {c}. \end{aligned}$$

Suppose that

$$\begin{aligned} M^{\frac{1}{2}}A=U\varSigma V^\mathrm{T} , \end{aligned}$$

is the singular value decomposition (SVD) of \(M^{\frac{1}{2}}A,\) where \(M^{\frac{1}{2}}\) is the square root of M. Therefore, we can present the matrix B as follows:

$$\begin{aligned} B=(M^{\frac{1}{2}}A)^\mathrm{T}(M^{\frac{1}{2}}A)=V\varSigma ^\mathrm{T}\varSigma V^\mathrm{T}=VFV^\mathrm{T}, \end{aligned}$$
(10)

where

$$\begin{aligned} F=\text {diag}({\sigma }_{1}^2,{\sigma }_{2}^2,\dots ,{\sigma }_{p}^2,0,\dots ,0), \text { and } \sigma _1\ge \sigma _2\ge \dots \ge \sigma _p > 0, \end{aligned}$$

and rank\((A)=p.\)

Using (10), we get

$$\begin{aligned} \sum _{j=0}^{k-1}(I-\lambda B)^{k-j-1}=VE_kV^\mathrm{T}, \end{aligned}$$

where

$$\begin{aligned} E_k=\text {diag}\left( {\frac{1-(1-\lambda {\sigma }_{1}^2)^k}{\lambda {\sigma }_{1}^2}}, \dots ,{\frac{1-(1-\lambda {\sigma }_{p}^2)^k}{\lambda {\sigma }_{p}^2}},k,\dots ,k\right) . \end{aligned}$$
(11)

It follows that

$$\begin{aligned} {x}^k=V(\lambda E_k)V^\mathrm{T}{c}&=V(\lambda E_k){\varSigma }^\mathrm{T}U^\mathrm{T}M^{\frac{1}{2}}(\bar{b}+ \delta b)\nonumber \\&=\sum _{i=1}^{p}\left\{ 1-(1-\lambda {\sigma }_{i}^2)^k\right\} {\frac{{u}_{i}^\mathrm{T}M^{\frac{1}{2}} (\bar{b}+\delta b)}{\sigma _i}}{v}_i. \end{aligned}$$
(12)

Using the SVD one easily finds

$$\begin{aligned} x^{*}=V\bar{E}U^\mathrm{T}M^{\frac{1}{2}}{\bar{b}}, \end{aligned}$$
(13)

where

$$\begin{aligned} \bar{E}=\text {diag}\left( {\frac{1}{{\sigma }_{1}}}, \dots , {\frac{1}{{\sigma }_{p}}},0,\dots ,0\right) . \end{aligned}$$
(14)

Also note that if \(|1-\lambda {\sigma }_{i}^2|<1\) for \(i=1,2,\dots ,p\), that is, \(0<\lambda <{\frac{2}{{\sigma }_{1}^2}}\), we get

$$\begin{aligned} \lim _{k\rightarrow \infty }(\lambda E_k\varSigma ^\mathrm{T})=\bar{E}, \end{aligned}$$

which completes the proof. \(\square\)

The functions

$$\begin{aligned} \phi _i= 1-(1-\lambda {\sigma }_{i}^2)^k, \,\,i=1,2,\dots , p, \end{aligned}$$
(15)

in (12) are the filter factors of the Landweber-type method (3).

On the other hand, the coefficients obtained in (12),

$$\begin{aligned} \{1-(1-\lambda {\sigma }_{i}^2)^k\}/\sigma _i, \end{aligned}$$

are close to zero when the singular values are small enough, because \(1-(1-\lambda {\sigma }_{i}^2)^k\approx k\lambda {\sigma }_{i}^2\) for small \(\sigma _i.\) Since the filter factors thus cancel the small singular values in the denominator of (12), we call the iterative method (3) a self-regularized iterative method.
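
A small numerical illustration of this point (our own, with arbitrary singular values) is sketched below: for a fixed number of iterations k the coefficients are small where \(\sigma _i\) is small, but they approach \(1/\sigma _i\) as k grows, which is the mechanism behind semi-convergence.

```python
import numpy as np

# Landweber filter factors (15) and the coefficients in (12), for arbitrary
# singular values decaying to zero (illustration only).
s = np.logspace(0, -8, 9)            # sigma_i = 1, 1e-1, ..., 1e-8
lam = 1.0 / s[0] ** 2                # a constant step with 0 < lam < 2 / sigma_1^2
for k in (10, 100, 1000):
    phi = 1.0 - (1.0 - lam * s ** 2) ** k
    print(k, phi / s)                # small where sigma_i is small (noise is damped),
                                     # but approaching 1 / sigma_i as k increases
```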

We now recall different relaxation-parameter strategies studied in [13, 14, 34]. We first recall the following rules, see [14], for picking the relaxation parameters in (3):

$$\begin{aligned} \text {(Strategy II) }\,\,\lambda _ k= & {} \left\{ \begin{array}{ll} \frac{\sqrt{2}}{\sigma _1^2} &{}\quad {\mathrm {for}} \,\, k = 0,1, \\ \frac{2}{\sigma _1^2}(1-\zeta _k) &{}\quad \text {for} \,\, k\ge 2, \end{array}\right. \end{aligned}$$
(16)
$$\begin{aligned} \text {(Strategy III) }\,\,\lambda _k= & {} \left\{ \begin{array}{ll} \frac{\sqrt{2}}{\sigma _1^2} &{}\quad {\mathrm {for}} \,\, k =0,1, \\ \frac{2}{\sigma _1^2}\frac{1-\zeta _k}{(1-\zeta _k^k)^2} &{}\quad {\mathrm {for}} \,\, k \ge 2 . \end{array}\right. \end{aligned}$$
(17)

where \(\zeta _k\) is the unique root in (0, 1) of

$$\begin{aligned} g_{k-1}(y) = (2k-1)y^{k-1}-(y^{k-2}+\dots +y+1) = 0, \end{aligned}$$

and \(\sigma _1\) is the largest singular value of \(M^{1/2}A.\) As mentioned in [14, Table 3.1], the roots \(\{\zeta _k\}_{k=2}^{\infty }\) can easily be precalculated. The aim of introducing the above relaxation parameters was to control (postpone) the semi-convergence phenomenon.
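
For completeness, the following sketch (ours) precomputes a few of these roots by bisection, using only the fact stated above that \(g_{k-1}\) has a unique root in (0, 1); the computed values can be compared with [14, Table 3.1].

```python
import numpy as np

def zeta(k, tol=1e-12):
    """Unique root in (0, 1) of g_{k-1}(y) = (2k-1) y^{k-1} - (y^{k-2} + ... + y + 1),
    computed by bisection (g_{k-1} is negative at 0 and positive at 1)."""
    g = lambda y: (2 * k - 1) * y ** (k - 1) - sum(y ** j for j in range(k - 1))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Strategy II then uses lambda_k = (2 / sigma_1^2) * (1 - zeta_k) for k >= 2.
print({k: round(zeta(k), 6) for k in range(2, 8)})
```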

To reduce the error in the iterative method (3), the following relaxation parameters are studied in [34]:

$$\begin{aligned} \text {(Strategy I) }\,\,\lambda _k= & {} \frac{\Vert M^{1/2}r^k\Vert ^2}{\Vert A^\mathrm{T} M r^k\Vert ^2}, \end{aligned}$$
(18)
$$\begin{aligned} \text {(Strategy IV) }\,\,\lambda _k= & {} \frac{\Vert u^k\Vert ^2}{\Vert M^{1/2} A u^k\Vert ^2}, \end{aligned}$$
(19)

where \(r^k=b-Ax^k\) and \(u^k=A^\mathrm{T}Mr^k\) for \(k=0,1,\dots .\)

The use of a priori information (like nonnegativity) when solving an inverse problem is a well-known technique to improve the quality of the reconstruction. An advantage of projection-type algorithms is the possibility to adapt them to handle convex constraints. The iterates can then usually be shown to converge toward a solution of the underlying convex feasibility problem [3]. We next consider the projected version of (3) as follows:

$$\begin{aligned} x^{k+1}=P_{\varOmega }\left( x^k+\lambda _k A^\mathrm{T}M(b-Ax^k)\right) , \end{aligned}$$
(20)

where \(P_{\varOmega }\) denotes the orthogonal projection onto a closed convex set \(\varOmega \subseteq {\mathbb {R}}^n.\) The convergence analysis of (20) has been studied by many researchers, see, e.g., [34, section 6] and [15, Theorem 1] for some special cases of (20) and [33, section 4.1] for the general case. As a result of those papers, the iterative method (20) with the relaxation parameters (16)–(19) converges to a point in \(\varOmega \cap S\), where S is the set of solutions of (2).
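
As an illustration of (20) with \(\varOmega\) the nonnegative orthant and the relaxation parameter of strategy IV, consider the following sketch (our own, with Cimmino-type weights and a toy nonnegative problem; all names are ours).

```python
import numpy as np

def projected_landweber(A, b, M, iters, x0=None):
    """Projected Landweber-type iteration (20) with Omega the nonnegative orthant
    and the relaxation parameter of strategy IV, i.e. (19)."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(iters):
        r = b - A @ x
        u = A.T @ (M @ r)                 # u^k = A^T M r^k
        if np.linalg.norm(u) < 1e-14:
            break
        Au = A @ u
        lam = (u @ u) / (Au @ (M @ Au))   # ||u^k||^2 / ||M^{1/2} A u^k||^2
        x = np.maximum(x + lam * u, 0.0)  # orthogonal projection onto x >= 0
    return x

# Toy nonnegative problem (for illustration only)
rng = np.random.default_rng(3)
A = np.abs(rng.standard_normal((60, 40)))
x_true = np.abs(rng.standard_normal(40))
b = A @ x_true
M = np.diag(1.0 / (A ** 2).sum(axis=1) / A.shape[0])   # Cimmino weights
x = projected_landweber(A, b, M, iters=2000)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```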

Modulus-based iterative methods for constrained Tikhonov regularization

The method is based on modulus-based iterative methods and their application to Tikhonov regularization with a nonnegativity constraint. Indeed, modulus-based iterative methods try to find a nonnegative solution of (2). Therefore, using these iterative methods enforces the nonnegativity constraint, which helps improve the quality of the reconstruction.

The idea behind the method is to reduce the computational effort for large-scale problems by using a Krylov subspace method with a fixed Krylov subspace. We briefly explain it here; see [1] for more details. In the method, a matrix \(A\in {\mathbb {R}}^{m\times n}\) is reduced to a small bidiagonal matrix by the Golub–Kahan bidiagonalization algorithm [18]. Let \(l \ll \min \lbrace m,n \rbrace .\) Applying l steps of Golub–Kahan bidiagonalization to the matrix A with initial vector \(u_{1}=\frac{b}{\Vert b\Vert }\) gives the following decomposition

$$\begin{aligned} AV_{l}=U_{l+1}B_{l+1,l}\,, A^\mathrm{T}U_{l}=V_{l}B_{l,l}^\mathrm{T}, \end{aligned}$$
(21)

where \(U_{l+1}\) and \(V_{l}\) have orthonormal columns. Here, \(B_{l+1,l}\) is a lower bidiagonal matrix with positive diagonal and subdiagonal entries. Substituting \(x=V_{l}y\) into (5) (where \(L=I_{n}\) and \(y \in {\mathbb {R}}^{l}\)) and using (21) gives the equation

$$\begin{aligned} T_{l,\mu }y=\hat{b}, \end{aligned}$$
(22)

where \(T_{l,\mu }=B_{l+1,l}^\mathrm{T}B_{l+1,l}+\mu ^2 I_{l}\) and \(\hat{b}=e_{1}\Vert A^\mathrm{T}b\Vert .\) Here, the vector \(e_{j}\) denotes the j-th column of an identity matrix of appropriate order. If we denote the largest singular value of the matrix \(B_{l+1,l}\) by \(\sigma _{\max },\) the largest eigenvalue of \(T_{l,\mu }\) is \(\sigma _{\max } ^{2}+\mu ^2\) and the smallest eigenvalue is close to \(\mu ^2\) (indeed bounded below by \(\mu ^2\)). This yields the following algorithm, which we call the modulus-based iterative (MBI) method.

Algorithm 1

MBI method

  1. \(y_{0}=V_{l}^\mathrm{T}x_{0}\)

  2. \({\tilde{y}}_{0}=V_{l}^\mathrm{T}|V_{l}y_{0}|\)

  3. For \(k=0,1,2,\dots\)

  4. \(\quad y_{k+1}=(\alpha I_{l}+T_{l,\mu })^{-1} \left( (\alpha I_{l}-T_{l,\mu }){\tilde{y}}_{k}+\hat{b}\right)\)

  5. \(\quad {\tilde{y}}_{k+1}=V_{l}^\mathrm{T}|V_{l}y_{k+1}|\)

  6. End For

  7. \(x=V_{l}{\tilde{y}}_{k+1}+|V_{l}{\tilde{y}}_{k+1}|\)

where the relaxation parameter \(\alpha\) is defined as \(\alpha =\sqrt{(\sigma _{\max } ^{2}+\mu ^2)\mu ^2}.\) To see how the regularization parameter \(\mu\) and the starting point \(x_0\) can be calculated, see [1, Algorithm 2]. Here, the absolute value \(|\cdot |\) of a vector is taken componentwise.
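
The following rough sketch (ours, not the reference implementation of [1]) outlines Algorithm 1; it includes a minimal Golub–Kahan bidiagonalization and fixes \(\mu\), l and the number of iterations by hand, whereas [1, Algorithm 2] prescribes how \(\mu\) and the starting point are chosen.

```python
import numpy as np

def golub_kahan(A, b, l):
    """l steps of Golub-Kahan bidiagonalization started with u_1 = b / ||b||.
    Returns V_l (n x l) and the lower bidiagonal B_{l+1,l}."""
    m, n = A.shape
    U = np.zeros((m, l + 1)); V = np.zeros((n, l)); B = np.zeros((l + 1, l))
    U[:, 0] = b / np.linalg.norm(b)
    v = A.T @ U[:, 0]
    for i in range(l):
        B[i, i] = np.linalg.norm(v); V[:, i] = v / B[i, i]
        u = A @ V[:, i] - B[i, i] * U[:, i]
        B[i + 1, i] = np.linalg.norm(u); U[:, i + 1] = u / B[i + 1, i]
        if i < l - 1:
            v = A.T @ U[:, i + 1] - B[i + 1, i] * V[:, i]
    return V, B

def mbi(A, b, mu, l, iters):
    """Rough sketch of the MBI method (Algorithm 1); mu, l and the number of
    iterations are fixed by hand here, unlike in [1, Algorithm 2]."""
    V, B = golub_kahan(A, b, l)
    T = B.T @ B + mu ** 2 * np.eye(l)
    bhat = np.zeros(l); bhat[0] = np.linalg.norm(A.T @ b)   # e_1 ||A^T b||
    smax = np.linalg.svd(B, compute_uv=False)[0]
    alpha = np.sqrt((smax ** 2 + mu ** 2) * mu ** 2)
    y = np.zeros(l)                                         # corresponds to x_0 = 0
    yt = V.T @ np.abs(V @ y)
    for _ in range(iters):
        y = np.linalg.solve(alpha * np.eye(l) + T,
                            (alpha * np.eye(l) - T) @ yt + bhat)
        yt = V.T @ np.abs(V @ y)
    return V @ yt + np.abs(V @ yt)                          # step 7 of Algorithm 1
```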

Numerical results

In this section, we examine the effectiveness of the strategies I–IV (see "Landweber-type methods" section) for the Landweber-type method (20). We present several examples that illustrate the performance of the strategies and the accuracy and efficiency of algorithm (20), where \(\varOmega \subseteq {\mathbb {R}}^n\) is the nonnegative orthant. We compare algorithm (20) (using the four strategies) with the MBI algorithm, i.e., [1, Algorithm 2], and the direct Tikhonov method.

All examples are obtained by discretizing the Fredholm integral equation (1). The discretized version is the discrete ill-posed problem (2). Since the considered discrete version is a consistent linear system of equations, the minimization problem (2) is replaced by the linear system \(Ax=b.\) We consider noise-free and noisy data with noise levels \((\frac{\Vert b-\bar{b}\Vert }{\Vert \bar{b}\Vert })\) of \(0.01\%, 0.1\%, 1\%, 5\%\) and \(10\%.\) We use additive independent Gaussian noise of mean zero. The examples Phillips [36], Shaw [37], Foxgood [2] and Gravity [21] are produced by Regularization Tools [23]. In all examples of this section, we consider a matrix A of dimension \(800\times 800\), and the noise-free right-hand side is calculated by Regularization Tools [23]. The matrix M in (20) is the weight matrix of CAV's method. The initial iterate in algorithm (20) is \(x^{0}=0\), and for the MBI algorithm we follow the instructions of [1, Algorithm 2]. Furthermore, in the MBI algorithm, we let \(\tau =1.01\) and \({ l }=30.\)

In order to analyze the errors, the numerical methods are compared to each other by computing the relative error \((\frac{\Vert x-x^k\Vert }{\Vert x\Vert })\) and the elapsed CPU time in seconds. All computations were carried out in Matlab version R2011b on a laptop computer with an Intel Core i5 processor and 4 GB of RAM.

We next present our four examples, all based on the Fredholm integral equation (1). We use the Matlab functions from [23] to produce the matrix A, the right-hand side vector b and the vector x, which is the discrete version of the solution of (1), for the following examples. In our examples, the exact solutions are nonnegative, i.e., all components of x are nonnegative.
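
For reproducibility, the noise model and the error measure can be sketched as follows (our own helper functions and naming; the test problems themselves are generated by the MATLAB routines from [23], and rescaling the noise vector to the exact prescribed level is our convention).

```python
import numpy as np

def add_noise(b_bar, level, rng):
    """Additive zero-mean Gaussian noise scaled so that ||b - b_bar|| / ||b_bar|| = level."""
    e = rng.standard_normal(b_bar.shape)
    return b_bar + level * np.linalg.norm(b_bar) * e / np.linalg.norm(e)

def relative_error(x_true, x_k):
    """Relative error ||x - x^k|| / ||x|| used in the figures and tables."""
    return np.linalg.norm(x_true - x_k) / np.linalg.norm(x_true)

# Usage with a test problem (A and x_true generated elsewhere, e.g. by [23]):
# b = add_noise(A @ x_true, 0.05, np.random.default_rng(0))   # 5% noise
```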

Example 1

(Phillips test) The integral equation is discretized by Galerkin method using the MATLAB function phillips from [23]. It produces the vectors b, x and matrix A. The coefficient matrix A is symmetric and indefinite. The singular values of the matrix decay gradually to zero. The matrix is ill-conditioned, and its condition number is \(k(A)= 1.08 \times 10^{10}.\)

Example 2

(Shaw test) The discretization is carried out by a simple quadrature rule using the function shaw from [23]. It generates the matrix A and vector x. Then, the right-hand side is computed as \(b = Ax.\) The obtained symmetric indefinite matrix A has the condition number \(k(A)=2.40 \times 10^{20}.\) The singular values of A decay fairly quickly to zero.

Example 3

(Foxgood test) This equation was first used by Fox and Goodwin [2]. We use the function foxgood from [23] to determine its discrete version using simple quadrature (midpoint rule). The function foxgood generates A, b and x. This gives a symmetric indefinite matrix A. Its singular values cluster at zero and the computed condition number is \(k(A)=1.09\times 10^{20}.\)

Example 4

(Gravity test) We use the function gravity from [23] to produce the matrix A which is a symmetric Toeplitz matrix. The test has a parameter d which is the depth at which the magnetic deposit is located. The default value for the parameter, see [23], is \(d=0.25\) and we use it in our tests. However, using a larger value for d leads to the faster decay of the singular values. The Matlab function gravity generates the matrix A and vector x. Then, the right-hand side is computed as \(b = Ax.\) The condition number for this matrix is \(k(A)=1.41\times 10^{20}.\)

In our four tests, strategies II and III give almost the same results for different noise levels. Therefore, we only show the results for strategy III in Figs. 1, 2, 3 and 4.

Furthermore, strategies I and IV give almost the same results at low noise levels, but the relative error of strategy I oscillates when a large amount of noise is present in the data. Of course, we expect such behavior for strategy I because its analysis is based on the consistency of the linear system, which may be destroyed by noisy data. For this reason, we only report strategy IV in the figures.

We also use the constant relaxation parameter \(\lambda =1.9\), which gives almost the same results as strategy IV for all examples except the Foxgood test, see Fig. 3. For that test, the constant relaxation parameter produces stable results, i.e., there is no semi-convergence phenomenon, but the rate of convergence is slow, whereas strategy IV gives stable and very fast convergence. For the Gravity test with \(10\%\) noise, we reach the same conclusion as for the Foxgood test. However, we only present the constant relaxation parameter \(\lambda =1.9\) for the Foxgood test.

Based on our numerical results, strategy IV gives the fastest convergence rate and the most stable results compared to the other relaxation-parameter strategies and the MBI algorithm, see Figs. 1, 2, 3 and 4.

The convergence rate of the MBI algorithm depends on the amount of noise. Indeed, more noise in the data gives a faster convergence rate, and vice versa. In our numerical tests, using \(10\%\) noise gives the best results for the MBI algorithm, and these may be comparable with strategy IV. In Fig. 1f (Phillips test), strategy IV is faster than the MBI algorithm in the first few iterations and does not show the semi-convergence phenomenon, whereas the MBI algorithm does. Fig. 2f (Shaw test) shows that the semi-convergence phenomenon occurs neither for strategy IV nor for the MBI algorithm; however, strategy IV gives better results than the MBI algorithm. For Fig. 3f (Foxgood test), we not only reach the same conclusion, but strategy IV also gives a much faster convergence rate than the MBI algorithm. In Fig. 4f, we have the same interpretation as for the Phillips test.

Fig. 1 Relative error histories for Phillips test

Table 1 Phillips test problem with \(5\%\) noise

Fig. 2 Relative error histories for Shaw test

Table 2 Shaw test problem with \(5\%\) noise

Fig. 3 Relative error histories for Foxgood test

Table 3 Foxgood test problem with \(5\%\) noise

Fig. 4 Relative error histories for Gravity test

Table 4 Gravity test problem with \(5\%\) noise

All results in Tables 1, 2, 3 and 4 are average relative errors and CPU times (in seconds) for the different methods over 100 runs within 30 iterations, using \(5\%\) noise. We report the results at the 30th iteration, where the MBI algorithm does not show semi-convergence behavior and where this iteration gives the smallest relative error for the method. Furthermore, we use the function discrep from [23], i.e., the discrepancy principle criterion, to compute Tikhonov's regularization parameter. We also use the exact value of \(\Vert \delta b\Vert\) in the function discrep. Based on the results shown in Figs. 1, 2, 3 and 4, we only report the results of strategy IV, the constant relaxation parameter \(\lambda =1.9,\) the MBI algorithm and the Tikhonov method with \(5\%\) noise.

Tables 1, 2, 3 and 4 show that the most time-consuming method is Tikhonov, and its relative error is larger than that of strategy IV and the MBI algorithm. The relative error of the Tikhonov method is smaller than that of the case \(\lambda =1.9\) (except for the Shaw test, see Table 2). Furthermore, using \(\lambda =1.9\) (within 30 iterations) produces a larger relative error than strategy IV and the MBI algorithm, although it is the least time-consuming strategy. Therefore, one need only compare the two successful cases, i.e., strategy IV and the MBI algorithm. Both have almost the same computational times, whereas the relative error of strategy IV is smaller than that of the MBI algorithm.

Conclusion

We recall the Landweber-type iterative method to obtain a solution of the Fredholm integral equation of the first kind with noisy data. We present the self-regularization property of the iterative method and give a necessary and sufficient condition for its convergence with a constant relaxation parameter. Our numerical results confirm that the Landweber-type iterative method is able to produce accurate and stable results for the integral equation with noisy data. The MBI algorithm produces results comparable to our method when the noise level is large enough.