BIT Numerical Mathematics, Volume 53, Issue 2, pp 459–473

Data based regularization for discrete deconvolution problems

Authors

  • T. Huckle, Fakultät für Informatik, Technische Universität München
  • M. Sedlacek, Fakultät für Informatik, Technische Universität München
Article

DOI: 10.1007/s10543-013-0421-9

Cite this article as:
Huckle, T. & Sedlacek, M. BIT Numer Math (2013) 53: 459. doi:10.1007/s10543-013-0421-9

Abstract

We focus on the solution of discrete deconvolution problems to recover the original information from blurred signals in the presence of Gaussian white noise more accurately. For a certain class of blur operators and signals we develop a diagonal preconditioner to improve the reconstruction quality, both for direct and iterative regularization methods. In this respect, we incorporate the variation of the signal data during the construction of the preconditioner. Embedding this method in an outer iteration may yield further improvement of the solution. Numerical examples demonstrate the effect of the presented approach.

Keywords

Tikhonov-Phillips · TSVD · CGLS · Ill-posed inverse problems

Mathematics Subject Classification (2010)

65F22 · 65F08 · 65R30

1 Introduction

We consider the discrete ill-posed linear model problem
$$ x \stackrel{\mathit{blur}}{\longrightarrow} Hx \stackrel{\mathit {noise}}{ \longrightarrow} Hx + \eta= b $$
(1.1)
where x∈ℝn is the original signal or image, H∈ℝn×n is the blur operator, η∈ℝn is a vector representing unknown perturbations such as noise or measurement errors, and b∈ℝn is the observed signal or image. Our aim is to recover x as accurately as possible. Because H may be extremely ill-conditioned or even singular, and because of the presence of noise, the direct solution of (1.1) will result in a useless reconstruction dominated by noise. Consequently, to avoid this and to solve (1.1) mainly on the signal subspace, corresponding to the large singular values, a regularization technique has to be applied.

1.1 Truncated singular value decomposition

The singular value decomposition (SVD) of H is given by
$$ H = U\varSigma V^T = \sum_{i=1}^n u_i\sigma_iv_i^T,\quad \text{where}\ \sigma_1 \geq\sigma_2 \geq\cdots\geq \sigma_n \geq0 $$
are the singular values of H located on the diagonal of Σ. Based on a decomposition such as the QR factorization or the SVD, direct regularization methods can be seen as spectral filters acting on the singular spectrum, diminishing the deterioration of the solution by noise. As an explicit splitting into the signal and noise subspaces is impossible, the truncated singular value decomposition (TSVD) [7, 9] is an intuitive approach which truncates the SVD expansion such that the solution mostly consists of quantities corresponding to the signal part, i.e.,
$$ x_k = \sum_{i=1}^{k} \frac{1}{\sigma_i} \bigl(u_i^Tb \bigr)v_i \quad \text{for}\ k \leq n $$
(1.2)
which is equivalent to solving \(\min_{x} \lVert x \rVert_{2}\) subject to \(\min_{x} \lVert H_{k}x - b \rVert_{2}\), where the rank-deficient and better conditioned coefficient matrix Hk is the closest rank-k approximation to H.
In order to obtain the truncation index k in (1.2), we can use the discrepancy principle [3]: considering a stochastic setting where the error in each component of η has a standard deviation ξ, i.e., b is affected by white noise of magnitude ξ, the expected value of the perturbation norm is \(\lVert\eta \rVert_{2} = \sqrt{n}\xi\). Thus we choose k by
$$ k = \operatorname*{arg\,min}_{k}\lVert Hx_k - b\rVert_2 \leq \nu\sqrt{n}\xi, $$
(1.3)
where ν>1 is a constant safety factor. Note that for this the error norm or the deviation ξ has to be known in advance. If this is not the case, one has to resort to other estimation techniques such as the (discrete) L-curve criterion; see, e.g., [9].
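To make this concrete, the following sketch (not taken from the paper; the names tsvd_discrepancy, nu, and xi are illustrative) combines the truncated expansion (1.2) with the truncation rule (1.3):

```python
# A minimal sketch (not the authors' code) of TSVD regularization (1.2) with the
# truncation index chosen by the discrepancy principle (1.3).
import numpy as np

def tsvd_discrepancy(H, b, xi, nu=1.1):
    """Return the TSVD solution x_k for the smallest k with ||H x_k - b||_2 <= nu*sqrt(n)*xi."""
    n = H.shape[1]
    U, s, Vt = np.linalg.svd(H)
    tol = nu * np.sqrt(n) * xi                      # expected perturbation norm, cf. (1.3)
    x_k = np.zeros(n)
    for k in range(n):
        x_k = x_k + (U[:, k] @ b) / s[k] * Vt[k]    # add the next term of the expansion (1.2)
        if np.linalg.norm(H @ x_k - b) <= tol:
            break
    return x_k
```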

1.2 Tikhonov-Phillips regularization

Instead of (1.1), the Tikhonov-Phillips regularization (TPR) [18, 19] in general form solves
$$ \min_{x} \bigl\lbrace \Vert Hx_{\alpha} - b \Vert _2^2 + \alpha^2 \Vert Lx_{\alpha} \Vert _2^2 \bigr\rbrace \quad {\Leftrightarrow}\quad \bigl(H^T H+\alpha^2 L^TL \bigr)x_{\alpha} = H^Tb, $$
(1.4)
where L∈ℝl×n, l≤n, is called the regularization matrix. The regularization parameter α≥0 is to be chosen such that both minimization criteria are balanced: the computed solution xα should be as close as possible to the solution of the original problem and sufficiently regular. Following the discrepancy principle, we can choose α so that
$$ \lVert Hx_{\alpha} - b\rVert_2 = \nu\sqrt{n}\xi. $$
(1.5)
This is a nonlinear equation in α which can be solved, e.g., by using a root finder such as Newton’s method.
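As an illustration (our own sketch, not the authors' implementation), the parameter of standard TPR (L=I) can be obtained from (1.5) by a simple log-scale bisection on the monotone residual norm; the bracket [a_lo, a_hi] and all names are assumptions:

```python
# Sketch: choose alpha for standard TPR (L = I) from the discrepancy principle (1.5).
# Assumes the residual norm is below the target at a_lo and above it at a_hi.
import numpy as np

def tikhonov(H, b, alpha):
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + alpha**2 * np.eye(n), H.T @ b)

def alpha_discrepancy(H, b, xi, nu=1.1, a_lo=1e-8, a_hi=1e2, iters=60):
    target = nu * np.sqrt(H.shape[1]) * xi
    f = lambda a: np.linalg.norm(H @ tikhonov(H, b, a) - b) - target
    for _ in range(iters):                    # bisection on a log scale;
        mid = np.sqrt(a_lo * a_hi)            # Newton's method would work as well
        a_lo, a_hi = (mid, a_hi) if f(mid) < 0 else (a_lo, mid)
    return np.sqrt(a_lo * a_hi)
```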

Instead of using the 2-norm as a means to control the error in the solution and to obtain regularity, i.e., instead of using L=I in (1.4), which leads to standard TPR, another possibility is to use discrete smoothing norms \(\lVert L_{k}x_{\alpha} \rVert_{2}\), where Lk usually is an approximation to the first or second derivative operator [5, 9]. For problems where the exact signal x is smooth, the solution of the general formulation (1.4) with a differential operator will be smoother and thus a more accurate reconstruction. Further approaches have been published recently: in [12] we incorporate spectral information of H into the regularization matrix, an H-dependent linear operator which acts like the identity on the noise subspace and like the zero matrix on the signal subspace, i.e., like a high-pass filter in the Fourier context. Noschese and Reichel [16] propose a seminorm for TPR which is constructed by solving \(\min_{L}\lVert Lx_{\alpha}\rVert_{2}\) for a prescribed structure of L.

1.3 (Preconditioned) iterative regularization

Within the class of iterative regularization methods, we focus on (P)CGLS [1], as it is a stable way to implement the CG method on the normal equations HTHx=HTb for least squares problems in the general case. The typical observation, which coincides, e.g., with the CG convergence analysis [3], is that in the first iterations the error is reduced relative to the large eigenvalues. In later steps, the eigenspectrum related to noise and small eigenvalues dominates the evolution of the approximate solution. Therefore, the restoration has to stop after a few iterations, before the method starts to reduce the error relative to the noise subspace. Similarly to the TSVD, we can apply the discrepancy principle and stop the reconstruction after the kth iteration determined by (1.3). Note that, following [14], neither MINRES nor GMRES is suited for the reconstruction of ill-posed image deblurring problems because they do not suppress the noise contribution sufficiently. In general, MR-II [4] and RRGMRES [2] are superior to these two methods when used to reconstruct ill-posed problems.
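The following sketch (illustrative only, not the implementation of [1]) shows plain CGLS on the normal equations with the discrepancy principle (1.3) as stopping rule:

```python
# Minimal CGLS sketch with early stopping by the discrepancy principle (1.3).
import numpy as np

def cgls_discrepancy(H, b, xi, nu=1.1, maxit=200):
    n = H.shape[1]
    tol = nu * np.sqrt(n) * xi
    x = np.zeros(n)
    r = b.copy()                      # residual b - H x for the start vector x = 0
    s = H.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(maxit):
        if np.linalg.norm(r) <= tol:  # stopping early acts as the regularization
            break
        q = H @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = H.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```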

For discrete ill-posed problems the application of a preconditioner in iterative regularization methods can easily lead to a deterioration of the reconstruction, by approximating the inverse also on the noise subspace or by removing high frequency components of the original signal. Therefore, an optimal preconditioner should act only on the signal part of the singular spectrum, i.e., on the large singular values, and should have no effect on the smaller singular values, so as not to amplify the noise. Following [6], such a preconditioner M should have the following properties:
  • on the signal subspace M≈|H|−1 where \(|H|=(H^{T}H)^{\frac{1}{2}}\), and

  • on the noise subspace MI or M≈0.

In [11] we use the probing facility of MSPAI [10] in order to derive a different approximation quality on the signal and noise subspace, respectively, which in some cases leads to faster or better reconstructions. Furthermore, we show that incorporating the variation of the signal data during the construction of preconditioners and the application within iterative methods can result in better or faster reconstructions. Based on this, the present paper motivates a diagonal preconditioner which can be effective for signals which mainly consist of nearly zero components and are weakly blurred by the operator H, i.e., the signal structure is mainly preserved.

The outline of the paper is the following: In Section 2, we introduce a data based diagonal preconditioner which improves the reconstruction of regularization methods by incorporating the values of the (observed) signal. Section 3 contains numerical results applying the proposed approach to deconvolution problems from Regularization Tools [8] and to artificial problems constructed using ill-conditioned matrices from Matlab [15]. A conclusion with a short outlook closes the discussion in Section 4.

2 Data based regularization

2.1 Motivation for incorporating the signal data

In many cases the solution vector is not smooth. In such cases the usage of, e.g., TPR with discrete smoothing norms will spoil the reconstruction. In [11] we observed improved reconstructions using preconditioners which act differently around discontinuities of the signal. This can be achieved by weighting those preconditioner components which affect such local effects. An improved reconstruction was observed by applying tridiagonal preconditioners which act like \(\operatorname{tridiag}(1,2,1)\) near continuous components but like \(\operatorname{tridiag}(0,1,0)\) near discontinuities. Hence, it may help to incorporate the behavior of the original signal x or some approximation from previous steps, e.g., the observed data vector b. However, we want to point out that the idea of estimating the signal structure from the data may not work for inverse problems where these two domains are fundamentally different, e.g., for tomography problems.

For signals which consist of nearly zero components and are only weakly blurred, the incorporation of the signal data in the form of a diagonal matrix leads to better reconstruction results. A motivation for the effectiveness is the following. Assume
$$ D := \operatorname{diag} \bigl\lbrack(x + \theta)_1,\ldots,(x + \theta)_n \bigr\rbrack $$
(2.1)
is the diagonal preconditioner built from an approximation x+θ of x, where x is the original (exact) signal and θ is some deviation from x. Note that x+θ can, but need not, be the observed right-hand side b. The computed solution \(\tilde{x}\), i.e., the reconstruction, can be written as \(\tilde{x} = x+D\delta\), with the deviation δ given in (2.2) below.

Theorem 2.1

Assuming (x+θ)j≠0 for j=1,…,n and Σ>0, the component-wise relative error with respect to the approximation x+θ of the reconstruction \(\tilde{x}\) of the unregularized equation \(H^{T}H\tilde{x} = H^{T}b\) satisfies
$$ \biggl\vert\frac{(\tilde{x}-x)_j}{(x+\theta)_j} \biggr\vert= \bigl\vert \bigl(U \varSigma^{-1}V^T \eta \bigr)_j \bigr\vert, $$
where UΣV^T is the singular value decomposition of DH^T.

Proof

From the unregularized equation \(H^{T}H\tilde{x} = H^{T}b\) with b=Hx+η it follows that
$$ \bigl(\mathit{DH}^T\mathit{HD} \bigr)D^{-1}(\tilde{x}-x) = \mathit{DH}^T\eta. $$
Using the identity D−1(x+θ)=:1 and the SVD of DH^T=UΣV^T, we obtain
$$ \delta:= D^{-1}(\tilde{x}-x) = U\varSigma^{-1}V^T\eta. $$
(2.2)
Therefore, using the deviation (2.2), the computed solution is \(\tilde{x} = x + D(U\varSigma^{-1}V^{T}\eta)\). With the component-wise consideration
$$ (\tilde{x}-x)_j = D_j \bigl(U\varSigma^{-1}V^T \eta \bigr)_j = (x+\theta)_j \bigl(U \varSigma^{-1}V^T \eta \bigr)_j, $$
we receive the relative error with respect to the approximation x+θ as
$$ \biggl\vert\frac{(\tilde{x}-x)_j}{(x+\theta)_j} \biggr\vert= \bigl\vert \bigl(U \varSigma^{-1}V^T \eta \bigr)_j \bigr\vert. $$
 □

Hence, for a solution \(\tilde{x} \neq0\), the relative error is of the order of the underlying data noise η if the elements of Σ−1 are not arbitrarily large, i.e., \(\varSigma^{-1} \in\mathcal{O}(1)\).

As the elements of Σ−1 can usually be arbitrarily large, we remove the noise contribution by truncating the small singular values in the original spectral decomposition \(\varSigma= \operatorname{diag}(\sigma_{1},\ldots,\sigma_{n})\) and denote
$$ \varSigma_k := \operatorname{diag}(\sigma_1,\ldots, \sigma_k,0,\ldots,0) \in\mathbb{R}^{n \times n}. $$
Furthermore, we denote the pseudoinverse of Σk as
$$ \varSigma_k^{\dagger} := \operatorname{diag} \bigl( \sigma_1^{-1}, \ldots,\sigma_k^{-1},0, \ldots,0 \bigr) \in \mathbb{R}^{n \times n}. $$

Theorem 2.2

After truncating the small singular values, corresponding to noise, the regularized solution of \((\mathit{DH}^{T}\mathit{HD})D^{-1}\tilde{x} = \mathit{DH}^{T}b\) is given by \(D^{-1}\tilde{x} = U\varSigma_{k}^{\dagger}V^{T}b\) and bounded by the underlying data noise η via
$$ \bigl\lVert D^{-1}(\tilde{x} - x)\bigr\rVert_2 \leq\frac{1}{\sigma_k} \lVert\eta\rVert_2 + e_{\mathrm{reg}}\quad\text{\textit{and}}\quad\bigl\lVert D^{-1}(\tilde{x} - x)\bigr\rVert_2^2 \leq \frac{1}{\sigma_k^2}\lVert\eta\rVert_2^2 + e_{\mathrm{reg}}^2, $$
where ereg denotes the regularization error.

Proof

Replacing Σ−1 by \(\varSigma_{k}^{\dagger}\) results in the solution
$$ D^{-1}\tilde{x} = U\varSigma_k^{\dagger}V^Tb = U\varSigma_k^{\dagger}V^T(Hx + \eta). $$
Hence, with U:=(U1,U2)T, we obtain
$$ D^{-1}(\tilde{x} - x) = U\varSigma_k^{\dagger}V^T \eta- U \begin{pmatrix} 0 & 0 \\ 0 & I_{n-k} \end{pmatrix} \begin{pmatrix} U_1^T \\ U_2^T \end{pmatrix} D^{-1}x $$
(2.3)
which, by passing to norms, can be written as
$$ \bigl\lVert D^{-1}(\tilde{x} - x)\bigr\rVert_2^2 = \bigl\lVert U\varSigma_k^{\dagger}V^T\eta \bigr\rVert_2^2 + \bigl\lVert U_2U_2^TD^{-1}x \bigr\rVert_2^2 =: \bigl\lVert U\varSigma_k^{\dagger}V^T\eta \bigr\rVert_2^2 + e_{\mathrm{reg}}^2 $$
(2.4)
Finally, (2.4) can be bounded by
$$ \bigl\lVert D^{-1}(\tilde{x} - x)\bigr\rVert_2^2 \leq \frac{1}{\sigma_k^2}\lVert\eta\rVert_2^2 + e_{\mathrm{reg}}^2. $$
 □

An immediate consequence of Theorem 2.2 is

Corollary 2.1

After truncating the small singular values, the component-wise relative error of the solution \(\tilde{x}\) with respect to the approximation x+θ is bounded by
$$ \biggl\vert\frac{\tilde{x}_j - x_j}{(x + \theta)_j} \biggr\vert \leq \frac{n}{\sigma_k}\max\vert \eta_j \vert+ \bigl\vert \bigl(U_2 U_2^T D^{-1}x \bigr)_j \bigr\vert . $$

Proof

Using (2.1) and passing (2.3) to components, we obtain
$$ \biggl\vert\frac{\tilde{x}_j - x_j}{(x + \theta)_j} \biggr\vert\leq \bigl| \bigl(U\varSigma_k^{\dagger}V^T \eta \bigr)_j \bigr| + \bigl\vert \bigl(U_2U_2^TD^{-1}x \bigr)_j \bigr\vert $$
in which the component-wise perturbation error can be bounded by
$$ \bigl| \bigl(U\varSigma_k^{\dagger}V^T\eta \bigr)_j \bigr| \leq \bigl\lVert U\varSigma_k^{\dagger}V^T \eta\bigr\rVert_{\infty} \leq\frac{n}{\sigma_k}\lVert \eta\rVert_{\infty} = \frac{n}{\sigma_k}\max\vert\eta_j \vert. $$
 □
As a heuristic, let us ignore the regularization error ereg in Theorem 2.2 and observe the following cases where σk(H) and σk(DHT) denote the kth singular value of H and DHT, respectively:
  1. For D=I and ignoring ereg, we get the absolute errors
     $$ \lVert\tilde{x} - x\rVert_2 \leq\frac{1}{\sigma_k(H)}\lVert\eta\rVert_2 \quad\text{and}\quad \bigl\vert(\tilde{x} - x)_j \bigr\vert\leq\frac{n}{\sigma_k(H)}\max_j\vert\eta_j \vert. $$

  2. For D according to (2.1), we obtain the relative error with respect to x+θ as
     $$ \biggl\vert\frac{(\tilde{x} - x)_j}{(x+\theta)_j} \biggr\vert\leq\frac{n}{\sigma_k(DH^T)}\max_j\vert\eta_j \vert. $$

  3. For general D, we obtain an arbitrary weighting possibility.
Consequently, our heuristic of taking the signal values into account weights the components according to their size, i.e., components of different size in x become of similar importance, in contrast to standard regularization with D=I. A general D allows arbitrary weighting to enforce the reconstruction of certain sections of the signal vector.

2.2 Data based regularization methods

Using the given signal x or b, we construct the data based preconditioner D∈ℝn×n as a diagonal matrix with entries
$$ (D_x)_{ii} := |x_i| + \varepsilon \quad \text{and} \quad(D_b)_{ii} := |b_i| + \varepsilon , $$
(2.5)
respectively. The parameter ε is mandatory and should be chosen ε≪1 or \(\varepsilon \in\mathcal{O}(\eta)\) if bi=0, to guarantee the non-singularity of Db. For bi≠0, we can choose ε=0. The same holds for Dx.
Following our numerical experiments, we obtain the best results when incorporating the signal values only once. Arbitrary weighting with Dr, where \(r \notin\bigl\lbrack\frac{1}{2},2 \bigr\rbrack\), usually leads to less accurate approximations, while still improving the reconstruction. For the TSVD, the decomposition \(\mathit{HD}^{\frac{1}{2}} = U\varSigma V^{T}\) leads to the modified ill-posed problem
$$ \bigl(D^{\frac{1}{2}}H^T\mathit{HD}^{\frac{1}{2}} \bigr)D^{-{\frac{1}{2}}}x = D^{\frac{1}{2}}H^Tb\quad{\Leftrightarrow}\quad x = D^{\frac{1}{2}}V \varSigma^{-1}U^Tb. $$
(2.6)
For TPR we use the direct application of D to define an appropriate norm \(\lVert x \rVert_{D}^{2} = x^{T}D^{T}Dx\). Using \(D^{\frac{1}{2}}\), we obtain a modified TPR
$$ \min_{x} \bigl\lbrace \Vert Hx_{\alpha} - b \Vert _2^2 + \alpha^2 \bigl\Vert D^{-\frac{1}{2}}x_{\alpha} \bigr\Vert _2^2 \bigr\rbrace \quad{\Leftrightarrow}\quad \bigl(D^{\frac{1}{2}}H^T\mathit{HD}^{\frac{1}{2}}+\alpha^2 I \bigr)D^{-\frac{1}{2}}x_{\alpha} = D^{\frac{1}{2}}H^Tb. $$
(2.7)
Note the connection of the preconditioned system \(D^{\frac {1}{2}}H^{T}\mathit{HD}^{\frac{1}{2}}\) in (2.6) to the generalized SVD of the matrix pair \((H,D^{-\frac{1}{2}})\) related to the seminorm \(\lVert x \rVert_{D^{-\frac{1}{2}}}\). Accordingly, we use split preconditioning with \(D^{\frac{1}{2}}\) in PCGLS.
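For concreteness, the following sketch (our own; it assumes D is built from b via (2.5) with ε=10−8 and a given truncation index k) carries out the data based TSVD of (2.6): decompose HD1/2, truncate, and map the result back with D1/2.

```python
# Sketch of data based TSVD following (2.6); D is built from b as in (2.5).
import numpy as np

def data_based_tsvd(H, b, k, eps=1e-8):
    d_sqrt = np.sqrt(np.abs(b) + eps)            # diagonal of D^{1/2}, cf. (2.5)
    U, s, Vt = np.linalg.svd(H * d_sqrt)         # SVD of H @ diag(d_sqrt)
    y = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])    # truncated solution of (H D^{1/2}) y = b
    return d_sqrt * y                            # x = D^{1/2} y, cf. (2.6)
```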

2.3 Improvement by outer iterations

It is possible to further improve on a first computed solution and start an iterative process of building diagonal preconditioners based on the current reconstruction. Following our regularization with outer iterations (ROI) algorithm, we either use b or a first reconstruction from an unpreconditioned regularization method as initial solution \(\tilde{x}^{(0)}\). For a given number of steps we construct \(D_{\tilde{x}^{(s)}}\) from a previously computed reconstruction \(\tilde{x}^{(s-1)}\) and compute a new solution \(\tilde{x}^{(s)}\). As this approach can be applied both to direct and iterative regularization methods, we refer to any method presented in Section 1 by the identifier \(\mathit{Regularization\_method}\). Note that, in general, every call of the \(\mathit{Regularization\_method}\) must be preceded by an estimation of the optimal regularization parameter.

Algorithm ROI: Improving the reconstruction by using the preconditioner \(D_{\tilde{x}^{(s)}}\) in a regularization method embedded in outer iterations
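A minimal sketch of the ROI loop (our own rendering of the algorithm above; regularization_method stands for any method of Section 1 and is assumed to estimate its regularization parameter anew in every call):

```python
# Sketch of ROI: rebuild the diagonal preconditioner from the current reconstruction,
# re-solve, and stop early by the discrepancy principle.
import numpy as np

def roi(H, b, regularization_method, xi, nu=1.1, steps=5, eps=1e-8):
    x = b.copy()                               # initial solution x^(0) = b
    tol = nu * np.sqrt(len(b)) * xi            # expected perturbation norm
    for _ in range(steps):
        d = np.abs(x) + eps                    # diagonal of the data based preconditioner, cf. (2.5)
        x = regularization_method(H, b, d)     # new reconstruction x^(s)
        if np.linalg.norm(H @ x - b) <= tol:   # discrepancy based stopping of the outer iteration
            break
    return x
```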

In general, the solution improves during the first steps. Since in later steps the error saturates and the reconstruction no longer changes noticeably, it is reasonable to perform only a small number of iterations; in most cases, steps ∈[1,5] is sufficient. Unfortunately, the class of discrete ill-posed problems for which this approach yields improved reconstructions cannot be characterized exactly. The prerequisite of data based regularization makes the method sensitive, and it may even spoil the solution, for problems which do not clearly satisfy the requested property. It turned out that a heuristic stopping criterion for the outer iterations can be a remedy for this problem. Following the discrepancy principle [3], we can stop the outer iterative process before reaching the maximum number of given steps if the current residual norm \(\lVert r^{(s)}\rVert_{2}\) drops below the expected value of the perturbation norm.

3 Numerical results

The setting of our experiments is the following:
  • We focus on weakly distorting operators from Regularization Tools [8] and Matlab [15], i.e., we consider deconvolution problems.

  • We assume that the entries of η are random numbers resulting from the same Gaussian distribution with zero mean and standard deviation ξ∈ℝ. Hence, we use white noise of different magnitude.

  • We perform all computations on normalized values and measure the quality of the reconstructions via the relative reconstruction error (RRE) \(\| x - \tilde{x}\|_{2} / \| x \|_{2}\), where \(\tilde{x}\) denotes the reconstruction and x the exact solution; see the sketch after this list.

  • We obtain the regularization parameter using the discrepancy principle, i.e., we use a zero finder for (1.5) to determine α in TPR and (1.3) to determine k in TSVD and in (P)CGLS. In all cases we choose ν=1.1.

  • Note that we use the discrepancy principle twice: once to stop the outer iterations in algorithm ROI and once to estimate the regularization parameter at a given ROI iteration.

  • All results are given by the RRE followed by the estimated regularization parameter in brackets. Due to the relationship between regularization using a seminorm and regularization of a preconditioned system (see, e.g., (2.7)), we use the term Preconditioning for all regularization methods in our result tables.

  • We denote reconstruction with standard regularization by using I as preconditioner (no preconditioner).

  • We use ε=10−8 for \(D_{\tilde{x}}\), where \(\tilde{x}^{(0)} = b\) is the initial solution for ROI. We fix the maximum number of outer iterations to steps=5.

  • For data based regularization with TSVD, we use the SVD of \(\mathit{HD}^{\frac{1}{2}}\), compute the solution \(\tilde{x}_{k}\) via the routine tsvd from [8], and apply the data based preconditioner to the solution by \(\tilde{x}_{k} \leftarrow D^{\frac{1}{2}}_{\tilde{x}^{(s)}}\tilde{x}_{k}\).

  • To abbreviate our notation, we use Ds instead of \(D^{\frac {1}{2}}_{\tilde{x}^{(s)}}\).
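For reference, a small sketch of this measurement setup (illustrative; the function names are ours): Gaussian white noise of standard deviation ξ is added to the exact right-hand side, and the RRE is computed as defined above.

```python
# Sketch of the experimental setup: white noise of standard deviation xi and the RRE.
import numpy as np

def add_white_noise(b_exact, xi, seed=0):
    rng = np.random.default_rng(seed)
    return b_exact + xi * rng.standard_normal(b_exact.shape)

def rre(x_exact, x_rec):
    return np.linalg.norm(x_exact - x_rec) / np.linalg.norm(x_exact)
```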

3.1 Example 1: constant positive pulse sequence

We consider the signal x1 of size n=450, a pulse sequence with constant positive discontinuities every 100 samples:
$$ (x_1)_{100\cdot i} = 5,\quad i=1,\ldots,4. $$
(3.1)
We blur x1 with the prolate matrix from [15] which has symmetric Toeplitz form and is severely ill-conditioned. Our problem has dimension n=450 and is positive definite as we choose w=0.34, i.e., we invoke gallery(’prolate’, 450,0.34).
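The test problem can be reproduced, e.g., as in the following sketch (the experiments use Matlab's gallery; here we assume the standard definition of the prolate matrix as the symmetric Toeplitz matrix with first column [2w, sin(2πwk)/(πk)], k=1,…,n−1):

```python
# Sketch of the test problem of Example 1: prolate blur operator and pulse signal x1.
import numpy as np
from scipy.linalg import toeplitz

n, w = 450, 0.34
k = np.arange(1, n)
col = np.concatenate(([2 * w], np.sin(2 * np.pi * w * k) / (np.pi * k)))
H = toeplitz(col)                         # symmetric Toeplitz, severely ill-conditioned

x1 = np.zeros(n)
x1[100 * np.arange(1, 5) - 1] = 5         # pulses at samples 100, 200, 300, 400, cf. (3.1)
b_exact = H @ x1                          # blurred signal; noise is added separately
```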
Following Table 1, we obtain significantly improved reconstructions for all data based regularization methods and all chosen noise magnitudes. Here, the discrepancy principle usually stops ROI after one outer iteration. Figure 1(a) shows the semiconvergence of (P)CGLS. Using data based preconditioning, we observe flattened convergence curves at a lower RRE. Note the difference between the regularization parameter estimated by the discrepancy principle and the true optimum. Figure 1(b) illustrates that the reconstruction is also improved around the discontinuities. At each of them, we obtain an RRE of ∼33 % for I, ∼11 % for D1, ∼3.5 % for D3, and ∼2.9 % for D5.
Fig. 1

Applying ROI to the gallery(’prolate’, 450,0.34) [15] operator and signal x1 (3.1) affected with ξ=1 %. Subfigure (a) gives the (P)CGLS semiconvergence. Subfigure (b) shows the absolute TPR error for the signal components (x1)85,…,(x1)115, i.e., around the discontinuity at (x1)100

Table 1

RRE for ROI for the gallery(’prolate’, 450,0.34) [15] operator and signal x1 (3.1) for noise of different magnitude. Values marked with an asterisk *s are obtained only if the discrepancy principle in ROI is switched off; the number s gives the step after which ROI is stopped when using the discrepancy principle.

Method    Precond.  Noise level
                    0.1 %              1 %                5 %                10 %
TPR       I         0.559 (0.047)      0.562 (0.165)      0.576 (0.400)      0.613 (0.620)
          D1        0.134 (0.057)      0.176 (0.243)      0.273 (0.623)      0.384 (0.973)
          D3        0.010 (0.091) *1   0.038 (0.296) *1   0.150 (0.662) *2   0.287 (0.936) *1
          D5        0.003 (0.095) *1   0.029 (0.300) *1   0.139 (0.670) *2   0.276 (0.947) *1
TSVD      I         0.560 (308)        0.565 (306)        0.579 (291)        0.618 (260)
          D1        0.135 (294)        0.177 (50)         0.267 (4)          0.269 (4)
          D3        0.010 (12) *1      0.026 (4) *1       0.054 (4) *1       0.057 (4) *1
          D5        0.002 (4) *1       0.005 (4) *1       0.012 (4) *1       0.018 (4) *1
(P)CGLS   I         0.560 (7)          0.564 (2)          0.567 (1)          0.570 (1)
          D1        0.134 (15)         0.157 (4)          0.245 (1)          0.248 (1)
          D3        0.009 (3) *1       0.023 (1) *1       0.048 (1) *1       0.062 (1) *1
          D5        0.002 (1) *1       0.007 (1) *1       0.034 (1) *1       0.064 (1) *1

3.2 Example 2: 2D Gaussian blur on problem blur

As a second example, we consider the test problem blur [8], which concerns the deblurring of images degraded by atmospheric turbulence blur. The matrix H is an n2×n2 symmetric, doubly block Toeplitz matrix that models blurring of an n×n image by an isotropic Gaussian point-spread function. The parameter σ controls the width of the point-spread function and thus the amount of smoothing and ill-posedness. H is symmetric block banded and possibly positive definite depending on n and σ. We choose \(H \in\mathbb{R}^{20^{2} \times20^{2}}\), band=3, and σ=2, i.e., we invoke blur(20,3,2). We refer to the stacked exact image as the signal x2.
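An analogous operator can be generated as in the following sketch (an assumed construction, not the Regularization Tools routine itself: a banded Toeplitz matrix T holding a truncated 1D Gaussian point-spread function, and H = T ⊗ T):

```python
# Sketch of a 2D Gaussian blur operator analogous to blur(20,3,2).
import numpy as np
from scipy.linalg import toeplitz

def gaussian_blur_2d(n=20, band=3, sigma=2.0):
    z = np.zeros(n)
    j = np.arange(band)
    z[:band] = np.exp(-j**2 / (2 * sigma**2))       # truncated 1D Gaussian kernel
    T = toeplitz(z) / (sigma * np.sqrt(2 * np.pi))
    return np.kron(T, T)                            # n^2 x n^2 doubly block Toeplitz operator

H = gaussian_blur_2d(20, 3, 2.0)
```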

Table 2 gives the results: for noise of small magnitude, we still observe a large improvement when using data based regularization; cf. ξ=0.1 % or Figs. 2(c) and (d). For large noise, only a weak improvement is gained and the reconstruction deteriorates over further outer iterations; cf. ξ=10 % in Table 2. However, in this case the discrepancy principle prevents the solution from being spoiled by stopping after the first outer iteration. Note that for ξ=1 %, ROI using TPR is not stopped before reaching the maximum of 5 steps. Following Fig. 2, the reconstruction is also improved for nonzero components in the signal: see, e.g., the cross in (b) and (c). Similarly to Example 1, we observe flattened and improved convergence curves of (P)CGLS in Fig. 3. Note that the optimum using D1 leads to a better reconstruction than for D3 at noise level 1 %.
Fig. 2

TPR of the blur(20,3,2) [8] problem where ξ=0.1 % in (a)–(c) and ξ≈0 % in (d)

Fig. 3

Applying ROI with (P)CGLS to the blur(20,3,2) [8] problem for two different levels of noise

Table 2

RRE using ROI for the blur(20,3,2) [8] problem and noise of different magnitude. Note that the values marked with an asterisk \({}^{\ast_{s}}\) are obtained only if the discrepancy principle in ROI is switched off. The number s gives the step after which the outer iteration is stopped when using the discrepancy principle

Method    Precond.  Noise level
                    0.1 %              1 %                5 %                10 %
TPR       I         0.228 (0.007)      0.358 (0.038)      0.516 (0.141)      0.579 (0.235)
          D1        0.143 (0.005)      0.323 (0.030)      0.518 (0.133)      0.573 (0.229)
          D3        0.081 (0.019) *1   0.269 (0.077)      0.470 (0.189) *1   0.549 (0.288)
          D5        0.074 (0.024) *1   0.243 (0.083)      0.446 (0.191) *1   0.547 (0.2851) *3
TSVD      I         0.257 (254)        0.398 (114)        0.585 (36)         0.629 (20)
          D1        0.176 (191)        0.375 (72)         0.580 (18)         0.605 (3)
          D3        0.113 (80) *1      0.354 (49) *1      0.575 (18) *1      0.640 (3) *1
          D5        0.111 (75) *1      0.341 (24) *1      0.592 (10) *1      0.705 (3) *1
(P)CGLS   I         0.238 (51)         0.371 (12)         0.553 (3)          0.579 (2)
          D1        0.150 (57)         0.335 (12)         0.541 (3)          0.574 (2)
          D3        0.095 (21) *1      0.291 (6) *1       0.515 (3) *1       0.608 (2) *1
          D5        0.094 (14) *1      0.270 (6) *1       0.508 (3) *1       0.584 (3) *1

3.3 Example 3: 1D Gaussian blur on signal with positive and negative discontinuities

We analyze the effect for a signal of dimension n=450 containing positive and negative discontinuities at prescribed sample positions.
The signal x3 is blurred by the 1D Gaussian operator blur1D with band=4 and σ=3, where blur1D is the 1D analogue of the 2D blur operator from [8].
Following Table 3, data based regularization strongly improves the reconstruction, similarly to Example 1. Except for ξ=5 % using TPR, the discrepancy principle stops ROI after one outer iteration, although more improvement is achievable along further outer iterations.
Table 3

RRE using ROI for the blur1D(450,4,3) problem and signal x3 for noise of different magnitude. Values marked with an asterisk *s are obtained only if the discrepancy principle in ROI is switched off; the number s gives the step after which the outer iteration is stopped when using the discrepancy principle.

Method    Precond.  Noise level
                    0.1 %              1 %                5 %                10 %
TPR       I         0.131 (0.021)      0.267 (0.107)      0.452 (0.321)      0.601 (0.575)
          D1        0.005 (0.056)      0.048 (0.186)      0.221 (0.490)      0.404 (0.840)
          D3        0.003 (0.092) *1   0.026 (0.291) *2   0.128 (0.645)      0.248 (0.904) *1
          D5        0.002 (0.098) *1   0.019 (0.310) *2   0.095 (0.688) *4   0.186 (0.966) *1
TSVD      I         0.175 (440)        0.323 (397)        0.543 (324)        0.689 (264)
          D1        0.002 (56)         0.041 (97)         0.193 (104)        0.385 (103)
          D3        0.002 (8) *1       0.027 (10) *1      0.152 (10) *1      0.313 (16) *1
          D5        0.001 (8) *1       0.025 (8) *1       0.137 (8) *1       0.285 (8) *1
(P)CGLS   I         0.152 (185)        0.293 (43)         0.461 (16)         0.573 (10)
          D1        0.003 (33)         0.035 (19)         0.172 (11)         0.326 (9)
          D3        0.002 (6) *1       0.023 (6) *1       0.128 (4) *1       0.262 (3) *1
          D5        0.001 (6) *1       0.018 (6) *1       0.101 (4) *1       0.205 (4) *1

3.4 Remarks

  1. We also compared data based preconditioning when applied in PCGLS, PMINRES [17], and PMR-II [4]. E.g., for the problem blur(32,4,8) from [8] or the ill-conditioned matrix gallery(’gcdmat’) from [15] using a point symmetric signal with a positive and negative sawtooth around \(\frac{n}{2}\), we obtain similar results among the selected Krylov methods and improved reconstructions compared to standard regularization without preconditioner.

  2. If ξ=0 %, we always observe a strong improvement in the reconstruction, even for more general problems. Hence, the less b is affected by noise, the larger the improvement will be when data based regularization is used.

  3. When using Dx (2.5), i.e., the exact signal x(0)=x in ROI, we observe improvement even for more general signals and distorting operators. In fact, this was our initial motivation to analyze the effect of data based regularization using x(0)=b.

  4. For signals not containing zero components, e.g., a constantly shifted pulse sequence corresponding to (3.1), we observe similar reconstruction quality for standard and data based regularization, i.e., we do not obtain an improvement.

  5. For certain examples, data based regularization may yield improvement although the signal structure is not preserved. An example is the problem wing by G. M. Wing from [8], where the signal is a positive flank. Applying TPR or TSVD, we observe that a single application of D1 slightly degrades the solution but improves it along further iterations by up to a factor of 2. However, the discrepancy principle in ROI does not prevent the reconstruction from being spoiled again after a couple of outer iterations.

  6. In [13] we combine data based regularization with smoothing norms in TPR by proposing the regularization matrix \(L_{k}D^{-1}_{\tilde{x}}\), k=1,2, and observe two effects: first, improvement for problems suitable for data based regularization; second, for smooth signals where discrete smoothing norms already yield significant improvement, this approach may additionally improve the reconstruction.

4 Conclusion

We considered the impact of taking the data of the observed right-hand side into account to improve the reconstruction when using regularization methods to solve discrete deconvolution problems. As classical regularization methods we used Tikhonov-Phillips regularization and the TSVD, while in the class of iterative methods we focused on (P)CGLS to compute a regularized solution.

For model problems where H preserves the shape of the original signal, i.e., performs a weak blurring, and the right-hand side contains many nearly zero components, the usage of data based regularization will improve the reconstruction, especially when the perturbation in the right-hand side is of small order and discontinuities are present. Here, incorporating the signal values in a diagonal matrix D is attractive because D is cheap to construct and to invert. Further improvement can be achieved by iteratively applying D to the solution iterates. As a heuristic stopping rule we used the discrepancy principle which, for some examples, showed pessimistic behavior, i.e., the outer iteration stopped although it would have been possible to further improve on the solution.

Acknowledgements

The authors thank the referees and P.C. Hansen for the useful suggestions which helped to establish a condensed and improved presentation of the manuscript.

Copyright information

© Springer Science+Business Media Dordrecht 2013