# Data based regularization for discrete deconvolution problems

## Authors

T. Huckle, M. Sedlacek

DOI: 10.1007/s10543-013-0421-9

Huckle, T. & Sedlacek, M. BIT Numer Math (2013) 53: 459. doi:10.1007/s10543-013-0421-9

## Abstract

We focus on the solution of discrete deconvolution problems to recover the original information from blurred signals in the presence of Gaussian white noise more accurately. For a certain class of blur operators and signals we develop a diagonal preconditioner to improve the reconstruction quality, both for direct and iterative regularization methods. In this respect, we incorporate the variation of the signal data during the construction of the preconditioner. Embedding this method in an outer iteration may yield further improvement of the solution. Numerical examples demonstrate the effect of the presented approach.

### Keywords

Tikhonov-Phillips · TSVD · CGLS · Ill-posed inverse problems

### Mathematics Subject Classification (2010)

65F22 · 65F08 · 65R30

## 1 Introduction

We consider the discrete model

\(Hx + \eta= b, \quad(1.1)\)

where *x*∈ℝ^{n} is the original signal or image, *H*∈ℝ^{n×n} is the blur operator, *η*∈ℝ^{n} is a vector representing the unknown perturbations such as noise or measurement errors, and *b*∈ℝ^{n} is the observed signal or image, respectively. Our aim is to recover *x* as well as possible. Because *H* may be extremely ill-conditioned or even singular, and because of the presence of noise, the direct solution of (1.1) will result in a useless reconstruction dominated by noise. Consequently, to avoid this and to solve (1.1) mainly on the signal subspace, corresponding to large singular values, a regularization technique has to be applied.

### 1.1 Truncated singular value decomposition

The singular value decomposition (SVD) of *H* is given by \(H = U\varSigma V^{T}\) with the singular values \(\sigma_{1} \geq\cdots\geq \sigma_{n} \geq0\) located on the diagonal of *Σ*. Based on a decomposition such as the QR factorization or the SVD, direct regularization methods can be seen as a spectral filter acting on the singular spectrum, diminishing the deterioration of the solution by noise. As an explicit splitting into the signal and noise subspace is impossible, the *truncated singular value decomposition* (TSVD) [7, 9] is an intuitive approach shrinking the SVD expansion such that the solution will mostly consist of quantities corresponding to the signal part, i.e.,

\(x_{k} = \sum_{i=1}^{k} \frac{u_{i}^{T}b}{\sigma_{i}}\,v_{i}, \quad(1.2)\)

where *H*_{k} is the closest rank-*k* approximation to *H*.

To determine the truncation index *k* in (1.2), we can use the discrepancy principle [3]: considering a stochastic setting where the error in each component of *η* has a standard deviation *ξ*, i.e., *b* is affected by white noise of magnitude *ξ*, the expected value of the perturbation norm is \(\lVert\eta \rVert_{2} = \sqrt{n}\xi\). Thus we choose *k* as the smallest index with

\(\lVert Hx_{k} - b \rVert_{2} \leq\nu\sqrt{n}\xi, \quad(1.3)\)

where *ν*>1 is a constant safety factor. Note that for this the error norm or the deviation *ξ* has to be known in advance. If this is not the case, one has to resort to other estimation techniques such as the (discrete) L-curve criterion; see, e.g., [9].
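As an illustration, the TSVD choice of *k* by the discrepancy principle (1.3) can be sketched as follows. This is a minimal NumPy sketch, not the authors' code; the routine name and loop structure are our own.

```python
import numpy as np

def tsvd_discrepancy(H, b, xi, nu=1.1):
    """TSVD reconstruction x_k as in (1.2); k is the smallest index
    with ||H x_k - b||_2 <= nu * sqrt(n) * xi, cf. (1.3)."""
    n = b.size
    U, s, Vt = np.linalg.svd(H)
    tol = nu * np.sqrt(n) * xi
    c = U.T @ b                      # expansion coefficients u_i^T b
    x_k = np.zeros(n)
    for k in range(n):
        x_k = x_k + (c[k] / s[k]) * Vt[k]
        if np.linalg.norm(H @ x_k - b) <= tol:
            return x_k, k + 1        # truncation index found
    return x_k, n                    # discrepancy never met: full expansion
```

Each pass adds one SVD term to the expansion and checks the residual against the threshold \(\nu\sqrt{n}\xi\), so the noise-dominated terms with small \(\sigma_i\) are never included.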

### 1.2 Tikhonov-Phillips regularization

*Tikhonov-Phillips regularization* (TPR) [18, 19] in general form solves

\(\min_{x_{\alpha}} \bigl\{ \lVert Hx_{\alpha} - b \rVert_{2}^{2} + \alpha\lVert Lx_{\alpha} \rVert_{2}^{2} \bigr\}, \quad(1.4)\)

where *L*∈ℝ^{l×n}, *l*≤*n*, is called the regularization matrix. The regularization parameter *α*≥0 is to be chosen such that both minimization criteria are balanced: the computed solution *x*_{α} should be as close as possible to the solution of the original problem and sufficiently regular. Following the discrepancy principle, we can choose *α* so that

\(\lVert Hx_{\alpha} - b \rVert_{2} = \nu\sqrt{n}\xi, \quad(1.5)\)

a nonlinear equation in *α* which can be solved, e.g., by using a root finder such as Newton’s method.
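Since the residual norm in (1.5) is monotone in *α*, any scalar root finder applies; the sketch below uses log-scale bisection in place of the Newton iteration mentioned above, for standard-form TPR (*L*=*I*). Function name, bracket, and iteration count are our own assumptions.

```python
import numpy as np

def alpha_discrepancy(H, b, xi, nu=1.1, lo=1e-12, hi=1e3, iters=60):
    """Choose alpha with ||H x_alpha - b||_2 = nu*sqrt(n)*xi, cf. (1.5),
    by bisection on a logarithmic alpha-scale (the residual norm is
    monotonically nondecreasing in alpha)."""
    tau = nu * np.sqrt(b.size) * xi
    n = H.shape[1]

    def residual(alpha):
        x = np.linalg.solve(H.T @ H + alpha * np.eye(n), H.T @ b)
        return np.linalg.norm(H @ x - b) - tau

    if residual(lo) > 0:          # even (almost) no regularization overshoots
        return lo
    for _ in range(iters):
        mid = np.sqrt(lo * hi)    # geometric midpoint
        if residual(mid) < 0:
            lo = mid
        else:
            hi = mid
    return lo
```

Newton's method on the same scalar equation, as in the text, converges faster but needs the derivative of the residual with respect to *α*.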

Instead of using the 2-norm as a means to control the error in the solution and to obtain regularity, i.e., instead of using *L*=*I* in (1.4) leading to standard TPR, another possibility is to use discrete smoothing norms \(\lVert L_{k}x_{\alpha} \rVert_{2}\), where *L*_{k} usually is an approximation to the first or second derivative operator [5, 9]. For problems where the exact signal *x* is smooth, the solution of the general formulation (1.4), using a differential operator, will be smoother and thus a more accurate reconstruction. Further recently published approaches are possible: in [12] we incorporate spectral information of *H* in the regularization matrix being an *H*-dependent linear operator which acts like the identity in the noise subspace and like the null matrix in the signal subspace, i.e., which acts like a high pass filter in the Fourier context. Noschese and Reichel [16] propose a seminorm for TPR which is constructed by solving \(\min_{L}\lVert Lx_{\alpha}\rVert_{2}\) for a prescribed structure of *L*.
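A minimal sketch of the general form (1.4) with a discrete first-derivative operator *L*_{1} follows; the normal-equations solve is one straightforward way to evaluate it and is our own illustration, not necessarily the authors' implementation.

```python
import numpy as np

def tikhonov_general(H, b, alpha, L=None):
    """General-form TPR: minimize ||Hx-b||^2 + alpha*||Lx||^2, cf. (1.4),
    via the normal equations (H^T H + alpha L^T L) x = H^T b."""
    n = H.shape[1]
    if L is None:                  # standard TPR: L = I
        L = np.eye(n)
    return np.linalg.solve(H.T @ H + alpha * (L.T @ L), H.T @ b)

def first_derivative(n):
    """Discrete first-derivative operator L_1 of size (n-1) x n."""
    L = np.zeros((n - 1, n))
    for i in range(n - 1):
        L[i, i], L[i, i + 1] = -1.0, 1.0
    return L
```

With `L = first_derivative(n)` the penalty \(\lVert L_{1}x_{\alpha}\rVert_{2}\) favors smooth reconstructions, which is exactly the situation where the discussion above expects an advantage over *L*=*I* for smooth exact signals.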

### 1.3 (Preconditioned) iterative regularization

Within the class of *iterative regularization methods*, we focus on *(P)CGLS* [1] as it is a stable way to implement the CG method on the normal equations *H*^{T}*Hx*=*H*^{T}*b* for least squares problems in the general case. The typical observation, which coincides, e.g., with the CG convergence analysis [3] is that in the first iterations the error is reduced relative to large eigenvalues. In later steps, the eigenspectrum related to noise and small eigenvalues dominates the evolution of the approximate solution. Therefore, the restoration has to stop after a few iterations before the method starts to reduce the error relative to the noise subspace. Similar to the TSVD, we can apply the discrepancy principle and stop the reconstruction after the *k*th iteration determined by (1.3). Note that following [14], both MINRES and GMRES are not suited for the reconstruction of ill-posed image deblurring problems because they do not suppress the noise contribution sufficiently. Compared to these two methods in general, MR-II [4] and RRGMRES [2] are superior when used to reconstruct ill-posed problems.
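The semiconvergence behavior described above turns early stopping itself into the regularization. A sketch of unpreconditioned CGLS with the discrepancy stopping rule (1.3) — variable names and the iteration cap are our own assumptions:

```python
import numpy as np

def cgls(H, b, xi, nu=1.1, maxit=200):
    """CGLS on the normal equations H^T H x = H^T b, stopped early by the
    discrepancy principle (1.3) to exploit semiconvergence."""
    n = H.shape[1]
    tol = nu * np.sqrt(b.size) * xi
    x = np.zeros(n)
    r = b.copy()                 # residual b - H x
    s = H.T @ r
    p = s.copy()
    gamma = s @ s
    for k in range(1, maxit + 1):
        q = H @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        if np.linalg.norm(r) <= tol:   # discrepancy principle reached
            return x, k
        s = H.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x, maxit
```

The first iterations reduce the error on the subspace of large singular values; stopping once \(\lVert r\rVert_{2} \leq\nu\sqrt{n}\xi\) prevents the later noise-dominated phase.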

In the iterative setting, a preconditioner *M* should have the following properties:

- on the signal subspace, *M*≈|*H*|^{−1}, where \(|H|=(H^{T}H)^{\frac{1}{2}}\), and
- on the noise subspace, *M*≈*I* or *M*≈0.

Furthermore, the preconditioner should respect the structure of *H*, i.e., the signal structure is mainly preserved.

The outline of the paper is the following: In Section 2, we introduce a data based diagonal preconditioner to improve the reconstruction of regularization methods by incorporating the values of the (observed) signal. Section 3 contains numerical results applying the proposed approach to deconvolution problems from Regularization Tools [8] and to artificial problems that we construct ourselves using ill-conditioned matrices from Matlab [15]. A conclusion with a short outlook closes the discussion in Section 4.

## 2 Data based regularization

### 2.1 Motivation for incorporating the signal data

In many cases the solution vector is not smooth. In such cases the usage of, e.g., TPR with discrete smoothing norms will spoil the reconstruction. In [11] we observed improved reconstructions using preconditioners which act differently around discontinuities of the signal. This can be achieved by weighting those preconditioner components which affect such local effects: an improved reconstruction was observed by applying tridiagonal preconditioners which act like \(\operatorname{tridiag}(1,2,1)\) near continuous components but like \(\operatorname{tridiag}(0,1,0)\) near discontinuities. Hence, it may help to incorporate the behavior of the original signal *x* or of some approximation from previous steps, e.g., the observed data vector *b*. However, we want to point out that the idea of estimating the signal structure from the data may not work for inverse problems where the signal and data domains are fundamentally different, e.g., for tomography problems.
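The switching preconditioner of [11] can be sketched as follows; the forward-difference jump detector and its threshold `jump_tol` are our own hypothetical choices, not taken from [11].

```python
import numpy as np

def tridiag_weighting(x_approx, jump_tol):
    """Tridiagonal preconditioner acting like tridiag(1,2,1) near smooth
    components and like tridiag(0,1,0) near discontinuities, which are
    detected by a simple forward-difference threshold."""
    n = x_approx.size
    T = np.zeros((n, n))
    jumps = np.abs(np.diff(x_approx)) > jump_tol   # jump between i and i+1
    for i in range(n):
        near_jump = (i > 0 and jumps[i - 1]) or (i < n - 1 and jumps[i])
        if near_jump:
            T[i, i] = 1.0                          # tridiag(0,1,0) row
        else:
            T[i, i] = 2.0                          # tridiag(1,2,1) row
            if i > 0:
                T[i, i - 1] = 1.0
            if i < n - 1:
                T[i, i + 1] = 1.0
    return T
```

The smoothing stencil is thus switched off exactly where averaging across a discontinuity would blur it, which is the local effect the weighting is meant to capture.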

Assume we are given an approximation *x*+*θ* of *x*, where *x* is the original (exact) signal and *θ* is some deviation from *x*. Note that *x*+*θ* can, but need not, be the observed right-hand side *b*. The computed solution \(\tilde{x}\), i.e., the reconstruction, can be written as \(\tilde{x} = x+D\delta\), with the deviation *Dδ* (2.2).

### Theorem 2.1

*Assuming* (*x*+*θ*)_{j}≠0, *j*=1,…,*n*, *and* *Σ*>0, *the component-wise relative error with respect to the approximation* *x*+*θ* *of the reconstruction* \(\tilde{x}\) *of the unregularized equation* \(H^{T}H\tilde{x} = H^{T}b\) *satisfies*

\(\frac{\tilde{x}_{j} - (x+\theta)_{j}}{(x+\theta)_{j}} = \bigl(U\varSigma^{-1}V^{T}\eta\bigr)_{j} - \bigl(D^{-1}\theta\bigr)_{j}, \quad j=1,\ldots,n,\)

*where* *UΣV*^{T} *is the spectral decomposition of* \(DH^{T}\) *with* \(D = \operatorname{diag}(x+\theta)\).

### Proof

Using \(D^{-1}(x+\theta) =: \mathbf{1}\) and the SVD of \(DH^{T} = U\varSigma V^{T}\), we obtain from \(H^{T}H\tilde{x} = H^{T}b = H^{T}(Hx+\eta)\) the computed solution \(\tilde{x} = x + D(U\varSigma^{-1}V^{T}\eta)\). With the component-wise consideration, the relative error with respect to *x*+*θ* follows as

\(\frac{\tilde{x}_{j} - (x+\theta)_{j}}{(x+\theta)_{j}} = \bigl(D^{-1}(\tilde{x}-x-\theta)\bigr)_{j} = \bigl(U\varSigma^{-1}V^{T}\eta\bigr)_{j} - \bigl(D^{-1}\theta\bigr)_{j}.\)

Hence, for a solution \(\tilde{x} \neq0\), the relative error is in the order of the underlying data noise *η* if the elements of *Σ*^{−1} are not arbitrarily large, i.e., \(\varSigma^{-1} \in\mathcal{O}(1)\). □

Since the elements of *Σ*^{−1} usually can be arbitrarily large, we get rid of the noise contribution by truncating the small singular values in the original spectral decomposition \(\varSigma= \operatorname{diag}(\sigma_{1},\ldots,\sigma_{n})\) and denote the truncated matrix *Σ*_{k} as \(\varSigma_{k} = \operatorname{diag}(\sigma_{1},\ldots,\sigma_{k},0,\ldots,0)\) with pseudoinverse \(\varSigma_{k}^{\dagger} = \operatorname{diag}(\sigma_{1}^{-1},\ldots,\sigma_{k}^{-1},0,\ldots,0)\).

### Theorem 2.2

*After truncating the small singular values*, *corresponding to noise*, *the regularized solution of* \((\mathit{DH}^{T}\mathit{HD})D^{-1}\tilde{x} = \mathit{DH}^{T}b\) *is given by* \(D^{-1}\tilde{x} = U\varSigma_{k}^{\dagger}V^{T}b\) *and bounded by the underlying data noise* *η* *via*

\(\bigl\lVert D^{-1}(\tilde{x} - x) + e_{\mathrm{reg}} \bigr\rVert_{2} \leq\frac{\lVert\eta \rVert_{2}}{\sigma_{k}},\)

*where* *e*_{reg} *denotes the regularization error*.

### Proof

Replacing *Σ*^{−1} by \(\varSigma_{k}^{\dagger}\) results in the solution \(D^{-1}\tilde{x} = U\varSigma_{k}^{\dagger}V^{T}b = U\varSigma_{k}^{\dagger}V^{T}(Hx+\eta)\). Hence, with the splitting *U*=:(*U*_{1},*U*_{2}), where *U*_{1} contains the first *k* columns, we obtain

\(D^{-1}\tilde{x} = D^{-1}x - U_{2}U_{2}^{T}D^{-1}x + U\varSigma_{k}^{\dagger}V^{T}\eta,\)

and with \(e_{\mathrm{reg}} := U_{2}U_{2}^{T}D^{-1}x\) the bound follows from \(\lVert U\varSigma_{k}^{\dagger}V^{T}\eta \rVert_{2} \leq\lVert\eta \rVert_{2}/\sigma_{k}\). □

An immediate consequence of Theorem 2.2 is

### Corollary 2.1

*After truncating the small singular values*, *the component-wise relative error of the solution* \(\tilde{x}\) *with respect to the approximation* *x*+*θ* *is bounded by the truncated noise contribution* \(U\varSigma_{k}^{\dagger}V^{T}\eta\), *the regularization error* \(e_{\mathrm{reg}}\), *and the deviation* \(D^{-1}\theta\).

### Proof

Split *e*_{reg} in Theorem 2.2 and observe the cases which arise depending on the *k*th singular values *σ*_{k}(*H*) and *σ*_{k}(*DH*^{T}) of *H* and *DH*^{T}, respectively. For general *D*, we obtain an arbitrary weighting possibility. □

Hence, the small components of *x* become of similar importance as the large ones, in contrast to standard regularization with *D*=*I*. A general *D* allows a general weighting to enforce the reconstruction of certain sections of the signal vector.

### 2.2 Data based regularization methods

Based on the data *x* or *b*, we construct the data based preconditioner *D*∈ℝ^{n×n} as a diagonal matrix with entries

\((D_{b})_{ii} = |b_{i}| + \varepsilon. \quad(2.5)\)

The shift *ε* is mandatory and should be chosen *ε*≪1 or \(\varepsilon \in\mathcal{O}(\eta)\) if *b*_{i}=0, to guarantee the non-singularity of *D*_{b}. For *b*_{i}≠0, we can choose *ε*=0. The same holds for *D*_{x}.

Numerical experiments indicate that using powers *D*^{r} with \(r \notin\bigl\lbrack\frac{1}{2},2 \bigr\rbrack\) usually leads to less accurate approximations while still improving the reconstruction. For the TSVD, the decomposition \(\mathit{HD}^{\frac{1}{2}} = U\varSigma V^{T}\) leads to the modified ill-posed problem

\(\mathit{HD}^{\frac{1}{2}}\bigl(D^{-\frac{1}{2}}x\bigr) = b. \quad(2.6)\)

For TPR, we use *D* to define an appropriate norm \(\lVert x \rVert_{D}^{2} = x^{T}D^{T}Dx\). Using \(D^{\frac{1}{2}}\), we obtain a modified TPR

\(\min_{x_{\alpha}} \bigl\{ \lVert Hx_{\alpha} - b \rVert_{2}^{2} + \alpha\bigl\lVert D^{-\frac{1}{2}}x_{\alpha} \bigr\rVert_{2}^{2} \bigr\}. \quad(2.7)\)

Note the connection of the preconditioned system \(D^{\frac {1}{2}}H^{T}\mathit{HD}^{\frac{1}{2}}\) in (2.6) to the generalized SVD of the matrix pair \((H,D^{-\frac{1}{2}})\) related to the seminorm \(\lVert x \rVert_{D^{-\frac{1}{2}}}\). Accordingly, we use split preconditioning with \(D^{\frac{1}{2}}\) in PCGLS.
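The transformation for the TSVD case can be sketched as follows; the diagonal entries |*b*_{i}|+*ε* follow our reading of (2.5), and a fixed truncation index *k* is used for brevity, whereas the paper chooses *k* by the discrepancy principle.

```python
import numpy as np

def data_preconditioner_sqrt(v, eps=1e-8):
    """Diagonal of D^{1/2} for the data based preconditioner built from
    the data vector v; eps > 0 keeps D nonsingular where v_i = 0."""
    return np.sqrt(np.abs(v) + eps)

def tsvd_preconditioned(H, b, k, eps=1e-8):
    """TSVD applied to the transformed problem (H D^{1/2}) y = b, cf.
    (2.6), followed by the back-transformation x = D^{1/2} y."""
    d_half = data_preconditioner_sqrt(b, eps)
    U, s, Vt = np.linalg.svd(H * d_half[None, :])   # H @ diag(d_half)
    y = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])       # truncated expansion
    return d_half * y
```

Because the back-transformation multiplies by \(D^{\frac{1}{2}}\), components where the data is (near) zero are damped, which is exactly the weighting effect discussed above.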

### 2.3 Improvement by outer iterations

It is possible to further improve on a first computed solution and start an iterative process of building diagonal preconditioners based on the current reconstruction. Following our *regularization with outer iterations* (ROI) algorithm, we either use *b* or a first reconstruction from an unpreconditioned regularization method as initial solution \(\tilde{x}^{(0)}\). For a given number of steps we construct \(D_{\tilde{x}^{(s)}}\) from the previously computed reconstruction \(\tilde{x}^{(s-1)}\) and compute a new solution \(\tilde{x}^{(s)}\). As this approach can be applied both to direct and iterative regularization methods, we refer to any method presented in Section 1 by the identifier \(\mathit{Regularization\_method}\). Note that, in general, every call of the \(\mathit{Regularization\_method}\) must be preceded by a method for estimating the optimal regularization parameter.

In general, the solution improves in the first steps. As in later steps the error saturates and the reconstruction does not change noticeably, it is reasonable to perform only a few iterations; in most cases, it is sufficient to choose steps ∈[1,5]. Unfortunately, the class of discrete ill-posed problems for which this approach yields improved reconstructions cannot be characterized exactly. The prerequisites of data based regularization make the method sensitive, and the solution may even be spoiled for problems which do not clearly satisfy the requested property. It turned out that a heuristic stopping criterion for the outer iterations can serve as a remedy: following the discrepancy principle [3], we stop the outer iterative process before reaching the maximum number of given steps if the current residual norm \(\lVert r^{(s)}\rVert_{2}\) drops below the expected value of the perturbation norm.
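Schematically, the ROI loop with the discrepancy-based early stop reads as follows; the `regularize` callback stands for any \(\mathit{Regularization\_method}\), and this is a sketch under our own naming, not the authors' code.

```python
import numpy as np

def roi(H, b, xi, regularize, steps=5, nu=1.1, eps=1e-8):
    """Regularization with outer iterations (ROI): rebuild the diagonal
    preconditioner from the current reconstruction, recompute, and stop
    early once the residual drops below the expected perturbation norm."""
    tol = nu * np.sqrt(b.size) * xi
    x = b.copy()                            # initial solution x^(0) = b
    for s in range(1, steps + 1):
        d_half = np.sqrt(np.abs(x) + eps)   # diagonal of D^{1/2}
        x = regularize(H, b, d_half)        # solve the preconditioned problem
        if np.linalg.norm(b - H @ x) <= tol:    # discrepancy principle
            break
    return x
```

Any of the methods from Section 1 can be plugged in as `regularize`, provided it accepts the current diagonal of \(D^{\frac{1}{2}}\) and performs its own regularization-parameter estimation internally.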

## 3 Numerical results

We focus on weakly distorting operators from Regularization Tools [8] and Matlab [15], i.e., we consider deconvolution problems.

- We assume that the entries of *η* are random numbers drawn from the same Gaussian distribution with zero mean and standard deviation *ξ*∈ℝ. Hence, we use white noise of different magnitude.
- We perform all computations on normalized values and measure the quality of the reconstructions via the relative reconstruction error (RRE) \(\| x - \tilde{x}\|_{2} / \| x \|_{2}\), where \(\tilde{x}\) denotes the reconstruction and *x* the exact solution.
- We obtain the regularization parameter using the discrepancy principle, i.e., we use a zero finder for (1.5) to determine *α* in TPR and (1.3) to determine *k* in TSVD and in (P)CGLS. In all cases we choose *ν*=1.1. Note that we use the discrepancy principle twice: once to stop the outer iterations in algorithm ROI and once to estimate the regularization parameter at a given ROI iteration.
- All results are given by the RRE followed by the estimated regularization parameter in brackets. Due to the relationship between regularization using a seminorm and regularization of a preconditioned system (see, e.g., (2.7)), we use the term *Preconditioning* for all regularization methods in our result tables.
- We denote reconstruction with standard regularization by using *I* as preconditioner (no preconditioning).
- We use *ε*=10^{−8} for \(D_{\tilde{x}}\), where \(\tilde{x}^{(0)} = b\) is the initial solution for ROI. We fix the maximum number of outer iterations to steps=5.
- For data based regularization with TSVD, we use the SVD of \(\mathit{HD}^{\frac{1}{2}}\), compute the solution \(\tilde{x}_{k}\) via the routine tsvd from [8], and apply the data based preconditioner to the solution by \(\tilde{x}_{k} \leftarrow D^{\frac{1}{2}}_{\tilde{x}^{(s)}}\tilde{x}_{k}\).
- To abbreviate our notation, we use *D*_{s} instead of \(D^{\frac{1}{2}}_{\tilde{x}^{(s)}}\).
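The noise and error conventions above can be written out as follows; interpreting a "noise level" of 0.1 %, 1 %, … as the ratio \(\lVert\eta\rVert_{2}/\lVert b\rVert_{2}\) is our own assumption, as is the helper naming.

```python
import numpy as np

def add_white_noise(b_exact, level, rng):
    """Perturb b with Gaussian white noise scaled so that
    ||eta||_2 / ||b||_2 = level (one common reading of 'noise level')."""
    eta = rng.standard_normal(b_exact.size)
    eta *= level * np.linalg.norm(b_exact) / np.linalg.norm(eta)
    return b_exact + eta, eta

def rre(x_exact, x_rec):
    """Relative reconstruction error ||x - x~||_2 / ||x||_2."""
    return np.linalg.norm(x_exact - x_rec) / np.linalg.norm(x_exact)
```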

### 3.1 Example 1: constant positive pulse sequence

We consider a signal *x*_{1} of size *n*=450 which is a pulse rate with constant positive discontinuities after every 100 samples (3.1). We blur *x*_{1} with the prolate matrix from [15], which has symmetric Toeplitz form and is severely ill-conditioned. Our problem has dimension *n*=450 and is positive definite as we choose *w*=0.34, i.e., we invoke gallery(’prolate’, 450, 0.34).

Along the outer iterations the reconstruction improves considerably: we observe an RRE of ∼46 % for *I*, ∼11 % for *D*_{1}, ∼3.5 % for *D*_{3}, and ∼2.9 % for *D*_{5}.

RRE for ROI for the gallery(’prolate’, 450,0.34) [15] operator and signal *x*_{1} (3.1) for noise of different magnitude. Note that the values marked with an asterisk \({}^{\ast_{s}}\) are obtained only if the discrepancy principle in ROI is switched off. The number *s* gives the step after which ROI is stopped when using the discrepancy principle

| Method | Precond. | 0.1 % | 1 % | 5 % | 10 % |
|---|---|---|---|---|---|
| TPR | *I* | 0.559 (0.047) | 0.562 (0.165) | 0.576 (0.400) | 0.613 (0.620) |
| | *D*_{1} | 0.134 (0.057) | 0.176 (0.243) | 0.273 (0.623) | 0.384 (0.973) |
| | *D*_{3} | 0.010 (0.091) \({}^{\ast_{1}}\) | 0.038 (0.296) \({}^{\ast_{1}}\) | 0.150 (0.662) \({}^{\ast_{2}}\) | 0.287 (0.936) \({}^{\ast_{1}}\) |
| | *D*_{5} | 0.003 (0.095) \({}^{\ast_{1}}\) | 0.029 (0.300) \({}^{\ast_{1}}\) | 0.139 (0.670) \({}^{\ast_{2}}\) | 0.276 (0.947) \({}^{\ast_{1}}\) |
| TSVD | *I* | 0.560 (308) | 0.565 (306) | 0.579 (291) | 0.618 (260) |
| | *D*_{1} | 0.135 (294) | 0.177 (50) | 0.267 (4) | 0.269 (4) |
| | *D*_{3} | 0.010 (12) \({}^{\ast_{1}}\) | 0.026 (4) \({}^{\ast_{1}}\) | 0.054 (4) \({}^{\ast_{1}}\) | 0.057 (4) \({}^{\ast_{1}}\) |
| | *D*_{5} | 0.002 (4) \({}^{\ast_{1}}\) | 0.005 (4) \({}^{\ast_{1}}\) | 0.012 (4) \({}^{\ast_{1}}\) | 0.018 (4) \({}^{\ast_{1}}\) |
| (P)CGLS | *I* | 0.560 (7) | 0.564 (2) | 0.567 (1) | 0.570 (1) |
| | *D*_{1} | 0.134 (15) | 0.157 (4) | 0.245 (1) | 0.248 (1) |
| | *D*_{3} | 0.009 (3) \({}^{\ast_{1}}\) | 0.023 (1) \({}^{\ast_{1}}\) | 0.048 (1) \({}^{\ast_{1}}\) | 0.062 (1) \({}^{\ast_{1}}\) |
| | *D*_{5} | 0.002 (1) \({}^{\ast_{1}}\) | 0.007 (1) \({}^{\ast_{1}}\) | 0.034 (1) \({}^{\ast_{1}}\) | 0.064 (1) \({}^{\ast_{1}}\) |

### 3.2 Example 2: 2D Gaussian blur on problem blur

As a second example, we consider the test problem blur [8] which is deblurring images degraded by atmospheric turbulence blur. The matrix *H* is an *n*^{2}×*n*^{2} symmetric, doubly block Toeplitz matrix that models blurring of an *n*×*n* image by an isotropic Gaussian point-spread function. The parameter *σ* controls the width of *H* and thus the amount of smoothing and ill-posedness. *H* is symmetric block banded and possibly positive definite depending on *n* and *σ*. We choose \(H \in\mathbb{R}^{20^{2} \times20^{2}}\), *band*=3, and *σ*=2, i.e., we invoke blur(20,3,2). We refer to the stacked right-hand side as *x*_{2}.
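The structure of blur(n, band, sigma) can be sketched as follows. This is a dense NumPy sketch after the description in [8]; the normalization constant is our best reading and may differ in detail from the toolbox routine.

```python
import numpy as np

def blur_matrix(n, band, sigma):
    """Sketch of the blur(n, band, sigma) operator: an n^2 x n^2 doubly
    block Toeplitz matrix A = kron(T, T) built from a banded symmetric
    Toeplitz factor T with Gaussian entries exp(-d^2 / (2 sigma^2))."""
    z = np.zeros(n)
    d = np.arange(min(band, n))
    z[d] = np.exp(-d**2 / (2.0 * sigma**2))   # first column of T
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if abs(i - j) < band:
                T[i, j] = z[abs(i - j)]
    return np.kron(T, T) / (2.0 * np.pi * sigma**2)
```

The Kronecker structure reflects that the 2D Gaussian point-spread function is separable, so blurring rows and columns of the *n*×*n* image are each described by the same banded Toeplitz factor *T*.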

For this example, improvement is obtained mainly for small noise; cf. *ξ*=0.1 % in Table 2 or Figs. 2(c) and (d). For large noise, only weak improvement is gained and the reconstruction is spoiled along further outer iterations; cf. *ξ*=10 % in Table 2. However, in this case, the discrepancy principle prevents spoiling the solution after the first outer iteration. Note that for *ξ*=1 %, ROI using TPR is not stopped before reaching 5 steps. Following Fig. 2, the reconstruction is also improved for nonzero components in the signal: see, e.g., the cross in (b) and (c). Similar to Example 1, we observe flattened and improved convergence curves of (P)CGLS in Fig. 3. Note that the optimum using *D*_{1} leads to a better reconstruction than for *D*_{3} for noise level 1 %.

RRE using ROI for the blur(20,3,2) [8] problem and noise of different magnitude. Note that the values marked with an asterisk \({}^{\ast_{s}}\) are obtained only if the discrepancy principle in ROI is switched off. The number *s* gives the step after which the outer iteration is stopped when using the discrepancy principle

| Method | Precond. | 0.1 % | 1 % | 5 % | 10 % |
|---|---|---|---|---|---|
| TPR | *I* | 0.228 (0.007) | 0.358 (0.038) | 0.516 (0.141) | 0.579 (0.235) |
| | *D*_{1} | 0.143 (0.005) | 0.323 (0.030) | 0.518 (0.133) | 0.573 (0.229) |
| | *D*_{3} | 0.081 (0.019) \({}^{\ast_{1}}\) | 0.269 (0.077) | 0.470 (0.189) \({}^{\ast_{1}}\) | 0.549 (0.288) |
| | *D*_{5} | 0.074 (0.024) \({}^{\ast_{1}}\) | 0.243 (0.083) | 0.446 (0.191) \({}^{\ast_{1}}\) | 0.547 (0.2851) \({}^{\ast_{3}}\) |
| TSVD | *I* | 0.257 (254) | 0.398 (114) | 0.585 (36) | 0.629 (20) |
| | *D*_{1} | 0.176 (191) | 0.375 (72) | 0.580 (18) | 0.605 (3) |
| | *D*_{3} | 0.113 (80) \({}^{\ast_{1}}\) | 0.354 (49) \({}^{\ast_{1}}\) | 0.575 (18) \({}^{\ast_{1}}\) | 0.640 (3) \({}^{\ast_{1}}\) |
| | *D*_{5} | 0.111 (75) \({}^{\ast_{1}}\) | 0.341 (24) \({}^{\ast_{1}}\) | 0.592 (10) \({}^{\ast_{1}}\) | 0.705 (3) \({}^{\ast_{1}}\) |
| (P)CGLS | *I* | 0.238 (51) | 0.371 (12) | 0.553 (3) | 0.579 (2) |
| | *D*_{1} | 0.150 (57) | 0.335 (12) | 0.541 (3) | 0.574 (2) |
| | *D*_{3} | 0.095 (21) \({}^{\ast_{1}}\) | 0.291 (6) \({}^{\ast_{1}}\) | 0.515 (3) \({}^{\ast_{1}}\) | 0.608 (2) \({}^{\ast_{1}}\) |
| | *D*_{5} | 0.094 (14) \({}^{\ast_{1}}\) | 0.270 (6) \({}^{\ast_{1}}\) | 0.508 (3) \({}^{\ast_{1}}\) | 0.584 (3) \({}^{\ast_{1}}\) |

### 3.3 Example 3: 1D Gaussian blur on signal with positive and negative discontinuities

We consider a signal *x*_{3} of size *n*=450 containing positive and negative discontinuities at prescribed positions. The signal *x*_{3} is blurred by the 1D Gaussian operator blur1D with *band*=4 and *σ*=3. Here, blur1D is the 1D analogue of the 2D blur operator taken from [8].

For *ξ*=5 % using TPR, the discrepancy principle stops ROI after one outer iteration, although more improvement would be achievable along further outer iterations.

RRE using ROI for the blur1D(450,4,3) problem and signal *x*_{3} for noise of different magnitude. Note that the values marked with an asterisk \({}^{\ast_{s}}\) are obtained only if the discrepancy principle in ROI is switched off. The number *s* gives the step after which the outer iteration is stopped when using the discrepancy principle

| Method | Precond. | 0.1 % | 1 % | 5 % | 10 % |
|---|---|---|---|---|---|
| TPR | *I* | 0.131 (0.021) | 0.267 (0.107) | 0.452 (0.321) | 0.601 (0.575) |
| | *D*_{1} | 0.005 (0.056) | 0.048 (0.186) | 0.221 (0.490) | 0.404 (0.840) |
| | *D*_{3} | 0.003 (0.092) \({}^{\ast_{1}}\) | 0.026 (0.291) \({}^{\ast_{2}}\) | 0.128 (0.645) | 0.248 (0.904) \({}^{\ast_{1}}\) |
| | *D*_{5} | 0.002 (0.098) \({}^{\ast_{1}}\) | 0.019 (0.310) \({}^{\ast_{2}}\) | 0.095 (0.688) \({}^{\ast_{4}}\) | 0.186 (0.966) \({}^{\ast_{1}}\) |
| TSVD | *I* | 0.175 (440) | 0.323 (397) | 0.543 (324) | 0.689 (264) |
| | *D*_{1} | 0.002 (56) | 0.041 (97) | 0.193 (104) | 0.385 (103) |
| | *D*_{3} | 0.002 (8) \({}^{\ast_{1}}\) | 0.027 (10) \({}^{\ast_{1}}\) | 0.152 (10) \({}^{\ast_{1}}\) | 0.313 (16) \({}^{\ast_{1}}\) |
| | *D*_{5} | 0.001 (8) \({}^{\ast_{1}}\) | 0.025 (8) \({}^{\ast_{1}}\) | 0.137 (8) \({}^{\ast_{1}}\) | 0.285 (8) \({}^{\ast_{1}}\) |
| (P)CGLS | *I* | 0.152 (185) | 0.293 (43) | 0.461 (16) | 0.573 (10) |
| | *D*_{1} | 0.003 (33) | 0.035 (19) | 0.172 (11) | 0.326 (9) |
| | *D*_{3} | 0.002 (6) \({}^{\ast_{1}}\) | 0.023 (6) \({}^{\ast_{1}}\) | 0.128 (4) \({}^{\ast_{1}}\) | 0.262 (3) \({}^{\ast_{1}}\) |
| | *D*_{5} | 0.001 (6) \({}^{\ast_{1}}\) | 0.018 (6) \({}^{\ast_{1}}\) | 0.101 (4) \({}^{\ast_{1}}\) | 0.205 (4) \({}^{\ast_{1}}\) |

### 3.4 Remarks

1. We also compared data based preconditioning when applied in PCGLS, PMINRES [17], and PMR-II [4]. E.g., for the problem blur(32,4,8) from [8] or the ill-conditioned matrix gallery(’gcdmat’) from [15] using a point symmetric signal with a positive and negative sawtooth around \(\frac{n}{2}\), we obtain similar results among the selected Krylov methods and improved reconstructions compared to standard regularization without preconditioner.

2. If *ξ*=0 %, we always observe a strong improvement in the reconstruction, even for more general problems. Hence, the less *b* is affected by noise, the larger the improvement will be when data based regularization is used.

3. When using *D*_{x} (2.5), i.e., the exact signal *x*^{(0)}=*x* in ROI, we observe improvement even for more general signals and distorting operators. In fact, this was our initial motivation to analyze the effect of data based regularization using *x*^{(0)}=*b*.

4. For signals not containing zero components, e.g., a constantly shifted pulse sequence corresponding to (3.1), we observe similar reconstruction quality between standard and data based regularization, i.e., we do not obtain improvement.

5. For certain examples, data based regularization may yield improvement although the signal structure is not preserved, e.g., for the problem wing by G. M. Wing from [8], where the signal is a positive flank. Applying TPR or TSVD, we observe that a single application of *D*_{1} slightly degrades the solution but improves it along further iterations up to a factor of 2. However, the discrepancy principle in ROI does not prevent spoiling the reconstruction again after a couple of outer iterations.

6. In [13] we combine data based regularization with smoothing norms in TPR by proposing the regularization matrix \(L_{k}D^{-1}_{\tilde{x}}\), *k*=1,2, observing two effects: first, improvement for problems suitable for data based regularization; second, for smooth signals where discrete smoothing norms yield significant improvement, this approach may additionally improve the reconstruction.

## 4 Conclusion

We considered the impact of taking the data of the observed right-hand side into account to improve on the reconstruction when using regularization methods to solve discrete deconvolution problems. As classical regularization methods we used Tikhonov-Phillips regularization and the TSVD while in the class of iterative methods we focused on (P)CGLS to compute a regular solution.

For model problems where *H* preserves the shape of the original signal, i.e., performs a weak blurring, and the right-hand side contains many nearly zero components, the usage of data based regularization improves the reconstruction, especially when the perturbation in the right-hand side is of small order and discontinuities are present. Here, incorporating the signal values in a diagonal matrix *D* is favorable because of its cheap construction and inversion. Further improvement can be achieved by iteratively applying *D* to the solution iterates. As a heuristic stopping rule we used the discrepancy principle which, for some examples, showed pessimistic behavior, i.e., the outer iteration stopped although it would have been possible to further improve on the solution.

## Acknowledgements

The authors thank the referees and P.C. Hansen for the useful suggestions which helped to establish a condensed and improved presentation of the manuscript.