Flexible GMRES for total variation regularization
Abstract
This paper presents a novel approach to the regularization of linear problems involving total variation (TV) penalization, with a particular emphasis on image deblurring applications. The starting point of the new strategy is an approximation of the non-differentiable TV regularization term by a sequence of quadratic terms, expressed as iteratively reweighted 2-norms of the gradient of the solution. The resulting problem is then reformulated as a Tikhonov regularization problem in standard form, and solved by an efficient Krylov subspace method. Namely, flexible GMRES is considered in order to incorporate new weights into the solution subspace as soon as a new approximate solution is computed. The new method is dubbed TV-FGMRES. Theoretical insight is given, and computational details are carefully unfolded. Numerical experiments and comparisons with other algorithms for TV image deblurring, as well as other algorithms based on Krylov subspace methods, are provided to validate TV-FGMRES.
Keywords
TV regularization · Flexible GMRES · Smoothing-norm preconditioning · Image deblurring

Mathematics Subject Classification
AMS 65F08 · AMS 65F10 · AMS 65F22

1 Introduction
Unfortunately, when considering large-scale problems whose associated coefficient matrix A may not have an exploitable structure or may not be explicitly stored, one cannot assume the GSVD to be available. In this setting iterative regularization methods are the only option, i.e., one can either solve the Tikhonov-regularized problem (1.2) iteratively, or apply an iterative solver to the original system (1.1) and terminate the iterations early (see [4, 11, 16, 18] and the references therein).
This paper considers the latter approach, sometimes referred to as “regularizing iterations”, and it focuses on the GMRES method [27, Chapter 6] and some variants thereof. GMRES does not require matrix-vector products with \(A^T\), and therefore it appears computationally attractive when compared to other regularizing Krylov subspace methods such as LSQR [26]. Although GMRES was proven to be a regularization method in [6], it is well known that it may perform poorly in some situations, e.g., when dealing with highly non-normal linear systems [21]. It has been shown, however, that this issue can be fixed by using specific preconditioners. For instance, the so-called smoothing-norm preconditioned GMRES method derived in [19] (and here referred to as GMRES(L)) follows from transforming the general Tikhonov problem (1.2) into standard form (i.e., into an equivalent Tikhonov problem with \(L=I\)), and then applying GMRES to the transformed fit-to-data term. GMRES(L) can be regarded as a right-preconditioned GMRES method that computes an approximate regularized solution as a linear combination of vectors that incorporate the smoothing effect of the regularization matrix L in (1.2). We emphasize that, here and in the following, the term “preconditioner” is used in a somewhat unconventional way. Indeed, the preconditioners used in this paper aim at computing a good regularized solution to problem (1.1) and, from a Bayesian point of view, they may be regarded as “priorconditioners” [5].
The convex optimization problem (1.3) is very challenging to solve, both because of its large-scale nature, and because of the presence of the non-differentiable total variation term (so that the efficient iterative techniques used to solve problem (1.2) cannot be straightforwardly adopted in this setting). We also mention in passing that the so-called \(\text {TV}_p\) penalization term, which evaluates the magnitude of the gradient with respect to some \(\ell ^p\) “norm”, \(0<p<1\), can be considered instead of the usual \(\text {TV}=\text {TV}_1\), see [7]. \(\text {TV}_p\) is notably more effective in enforcing sparse gradients (as it better approximates the \(\ell ^0\) quasi-norm), but the resulting Tikhonov-like problem is not convex anymore (and therefore may have multiple local minima). A variety of numerical approaches for the solution of (1.3) have already been proposed: some of them are based on fixed-point iterations, smooth approximations of \(\text {TV}(x)\), fast gradient-based iterations, and Bregman-distance methods; see [3, 8, 25, 29], to cite only a few.
This paper is concerned with strategies that stem from the local approximation of (1.3) by a sequence of quadratic problems of the form (1.2), and that exploit Krylov subspace methods to compute solutions thereof. To the best of our knowledge, this idea was first proposed for total variation regularization in [30], where the authors derive the so-called iteratively reweighted norm (IRN) method consisting of the solution of a sequence of penalized weighted least-squares problems with diagonal weighting matrices incorporated into the regularization term and dependent on the previous approximate solution (so that they are updated from one least-squares problem to the next one). For large-scale unstructured problems, this method intrinsically relies on an inner-outer iteration scheme. In the following we use the acronym IRN to indicate a broad class of methods that can be recast in this framework.
Although the IRN method [30] is theoretically well-justified and experimentally effective, it has a couple of drawbacks. Firstly, conjugate gradient is repeatedly applied from scratch to the normal equations associated with each penalized least-squares problem of the form (1.2) in the sequence: this may result in a large overall number of iterations. Secondly, the regularization parameter \(\lambda \) should be chosen (and fixed) in advance. The so-called modified LSQR (MLSQR) method [1] partially remedies both shortcomings. Although the starting point of MLSQR is still an IRN approach [30], each Tikhonov-regularized problem in the sequence of least-squares problems is transformed into standard form: in this way the matrix A is right-preconditioned and a preconditioned LSQR method can be applied. This approach typically results in a smaller number of iterations than IRN [30]; moreover, different values of the regularization parameter can be easily considered. On the downside, LSQR is still applied sequentially to each IRN least-squares problem, and a new approximation subspace for the LSQR solution is computed from scratch each time. The so-called GKSpq method [23] leverages generalized Krylov subspaces (GKS), i.e., approximation subspaces into which the updated weights and adaptive regularization parameters can be easily incorporated as soon as they become available. In other words, only one approximation subspace is generated when running the GKSpq method for the IRN least-squares problems associated with (1.3), and the approximate solutions are obtained by orthogonal projections onto GKS of increasing dimension. In this way, GKSpq avoids inner-outer iterations and is very efficient when compared to IRN and MLSQR.
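To make the inner-outer structure of the IRN idea concrete, the following sketch applies it to a 1d toy deblurring problem. This is an illustration, not the implementation of [30]: the operator A, the value of \(\lambda \), the safeguard eps, and the dense inner solve (standing in for the inner conjugate gradient iterations) are all assumptions made for the example.

```python
import numpy as np

def irn_tv_1d(A, b, lam, n_outer=10, eps=1e-8):
    """Sketch of the IRN strategy in 1d: solve a sequence of reweighted
    Tikhonov problems  min ||A x - b||_2^2 + lam ||W_k D x||_2^2,
    where W_k = diag((|D x_k| + eps)^(-1/2)) is updated from the previous
    iterate. A small dense solve stands in for the inner CG iterations."""
    m, n = A.shape
    D = np.eye(n - 1, n) - np.eye(n - 1, n, k=1)   # first-difference matrix
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # initial (unweighted) guess
    for _ in range(n_outer):
        w = (np.abs(D @ x) + eps) ** (-0.5)         # updated diagonal weights
        WD = w[:, None] * D
        # normal equations of the current quadratic (Tikhonov) approximation
        x = np.linalg.solve(A.T @ A + lam * WD.T @ WD, A.T @ b)
    return x

# piecewise-constant signal, toy smoothing operator, small noise (illustrative)
rng = np.random.default_rng(0)
n = 50
x_true = np.zeros(n); x_true[15:35] = 1.0
A = np.tril(np.ones((n, n))) / n
b = A @ x_true + 1e-3 * rng.standard_normal(n)
x_tv = irn_tv_1d(A, b, lam=1e-4)
print(np.linalg.norm(x_tv - x_true) / np.linalg.norm(x_true))
```

Each outer step solves a quadratic problem of the form (1.2), and the weights tie consecutive problems together; for large unstructured A the dense solve above would be replaced by an inner Krylov solver, which is precisely the inner-outer scheme discussed in the text.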
All the methods surveyed so far implicitly consider the normal equations associated with least-squares approximations of problem (1.3). As already remarked, approaches based on GMRES applied directly to the fit-to-data term in (1.2) may be more beneficial in some situations, as the computational overhead of dealing with \(A^T\) can be avoided. The restarted generalized Arnoldi–Tikhonov (ReSt-GAT) method [15] is arguably the only approach that generates a GMRES-like approximation subspace for the solution of each least-squares problem associated with the IRN strategy. However, ReSt-GAT has two shortcomings: it is based on an inner-outer iteration scheme (though approximations recovered during an iteration cycle are carried over to the next one by performing convenient warm restarts), and the TV penalization does not directly affect the approximation subspace of ReSt-GAT (failing to properly enhance piecewise constant reconstructions).
The goal of this paper is to propose a novel strategy that employs GMRES for the solution of Tikhonov-regularized problems associated to the IRN approach to (1.3). In particular, a flexible instance of a GMRES(L)-like method is used to solve preconditioned versions of system (1.1), which are obtained by considering quadratic approximations to problem (1.3), performing transformations into standard form, and applying GMRES to the resulting fit-to-data term. In this way, the effect of the total variation regularization term (defined with respect to iteratively updated weights and a discrete gradient operator) is incorporated into the solution subspace, which is affected by both the null space of the regularization matrix and the adaptive weights. As the weights are updated as soon as a new approximate solution becomes available, i.e., immediately after a new GMRES iteration is computed, the flexible GMRES (FGMRES) method (see [27, Chapter 9]) is employed to handle variable preconditioning along the iterations. The resulting regularization method is dubbed Total-Variation-FGMRES (TV-FGMRES). We emphasize that the TV-FGMRES method is inherently parameter-free, as only one stopping criterion should be set to suitably terminate the iterations (while, for all the other solvers for problem (1.3) listed so far, one has to choose both the parameter \(\lambda \) and the number of iterations). Moreover, the new approach is different from the ReSt-GAT one [15] for two reasons: firstly, the standard GMRES approximation subspaces are modified and, secondly, regularizing iterations are employed rather than solving a sequence of Tikhonov problems (1.2); also, this approach is somewhat analogous to the GKSpq [23] one, but the two methods differ in the computation of the approximation subspaces (recall that the GKSpq ones involve both \(A^T\) and \(\lambda \)).
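For readers unfamiliar with flexible GMRES, a minimal flexible Arnoldi/FGMRES routine in the spirit of [27, Chapter 9] can be sketched as follows. This is the generic textbook variant, not the TV-FGMRES method itself; the test matrix and the identity preconditioner are illustrative assumptions.

```python
import numpy as np

def fgmres(A, b, precond, m=40):
    """Minimal flexible GMRES: the preconditioner precond(v, j) may change
    at every iteration j, so the preconditioned vectors Z must be stored
    alongside the Krylov basis V (the 'flexible' Arnoldi process)."""
    n = b.size
    V = np.zeros((n, m + 1))
    Z = np.zeros((n, m))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    k = m
    for j in range(m):
        Z[:, j] = precond(V[:, j], j)      # flexible (iteration-dependent) step
        w = A @ Z[:, j]
        for i in range(j + 1):             # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:            # (lucky) breakdown
            k = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # projected least-squares problem: min_y || beta*e1 - H y ||_2
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)[0]
    return Z[:, :k] @ y                    # solution built from the Z basis

rng = np.random.default_rng(1)
n = 40
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = fgmres(A, b, precond=lambda v, j: v)   # identity preconditioner: plain GMRES
print(np.linalg.norm(A @ x - b))
```

The key design point, exploited by TV-FGMRES, is that the approximate solution is expanded in the preconditioned vectors Z rather than in the orthonormal basis V, so iteration-dependent (weighted) preconditioners directly shape the solution subspace.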
This paper is organized as follows. Section 2 covers some background material, including the definition of the weighting matrices for the approximation of the total variation regularization term in an IRN fashion, and a well-known procedure for transforming problem (1.2) into standard form. Section 3 describes the new TV-FGMRES method. Section 4 dwells on implementation details. Section 5 contains numerical experiments performed on three different image deblurring test problems. Section 6 presents some concluding remarks and directions for future work.
2 IRN, weights, and standard form transformation
- The one-dimensional (1d) case. In a discrete setting with \(v\in \mathbb {R}^N\), \(\text {TV}(v) = \Vert D_\mathrm{1d}v\Vert _1\), where
$$\begin{aligned} D_\mathrm{1d}=\left[ \begin{array}{cccc} 1 &{} -1 &{} &{} \\ &{} \ddots &{} \ddots &{} \\ &{} &{} 1 &{} -1 \\ \end{array} \right] \in \mathbb {R}^{(N-1)\times N}. \end{aligned}$$(2.3)
The weighting matrix
$$\begin{aligned} W_\mathrm{1d}= W_\mathrm{1d}(D_\mathrm{1d}v) = \mathrm {diag}\left( \left| D_\mathrm{1d}v\right| ^{-1/2} \right) \in \mathbb {R}^{(N-1)\times (N-1)}\,, \end{aligned}$$(2.4)
where both modulus and exponentiation are applied component-wise, is used in practice to approximate the 1-norm. Indeed, for a given v, one can easily see that
$$\begin{aligned} \Vert W_\mathrm{1d}D_\mathrm{1d}v\Vert _2^2=\sum _{k=1}^{N-1}\left| [D_\mathrm{1d}v]_k\right| ^{-1}[D_\mathrm{1d}v]_k^{2}=\sum _{k=1}^{N-1}\left| [D_\mathrm{1d}v]_k\right| =\Vert D_\mathrm{1d}v\Vert _1 =\text {TV}(v) \,, \end{aligned}$$
where \([w]_k\) denotes the kth entry of a vector w.
- The two-dimensional (2d) case. In a discrete setting,
$$\begin{aligned} \text {TV}(v) = \left\| \left( (D^\mathrm {h}v)^2+(D^\mathrm {v}v)^2\right) ^{1/2}\right\| _1\,, \end{aligned}$$
where, if \(v\in \mathbb {R}^N\) is obtained by stacking the columns of a 2d array \(V\in \mathbb {R}^{n\times n}\) with \(N=n^2\), the discrete first derivatives in the horizontal and vertical directions are given by
$$\begin{aligned} D^\mathrm {h}= (D_\mathrm{1d}\otimes I) \in \mathbb {R}^{n(n-1)\times n^2}\,,\qquad D^\mathrm {v}= (I \otimes D_\mathrm{1d}) \in \mathbb {R}^{n(n-1)\times n^2}\,, \end{aligned}$$
respectively. Here \(D_\mathrm{1d}\) is the 1d first derivative matrix (2.3) of appropriate size, and I is the identity matrix of size n, so that 2d discrete operators are defined in terms of the corresponding 1d ones (note that here both \(D_\mathrm{1d}\) and I have n columns, i.e., the size of the 2d array V). Deriving an expression for the weights in the discrete 2d setting is less straightforward. Following [30], for a given v, and letting \(\widetilde{N}=n(n-1)\), one takes
$$\begin{aligned} D_{{\mathrm{2d}}}= & {} \left[ \begin{array}{c} D^\mathrm {h}\\ D^\mathrm {v}\end{array} \right] \in \mathbb {R}^{2\widetilde{N}\times N}\,,\nonumber \\ \widetilde{W}_{{\mathrm{2d}}}= & {} \widetilde{W}_{{\mathrm{2d}}}(D_{{\mathrm{2d}}}v) =\mathrm {diag}\left( \left( (D^\mathrm {h}v)^2+(D^\mathrm {v}v)^2\right) ^{-1/4} \right) \in \mathbb {R}^{\widetilde{N}\times \widetilde{N}}\,,\nonumber \\ W_{{\mathrm{2d}}}= & {} W_{{\mathrm{2d}}}(D_{{\mathrm{2d}}}v)=\left[ \begin{array}{cc} \widetilde{W}_{{\mathrm{2d}}}&{} 0\\ 0 &{} \widetilde{W}_{{\mathrm{2d}}}\end{array} \right] \in \mathbb {R}^{2\widetilde{N}\times 2\widetilde{N}}\,. \end{aligned}$$(2.5)
In the following, when the distinction between the 1d and the 2d cases can be waived, we will use the simpler notations W for the \(M\times M\) diagonal weighting matrix, and D for the \(M\times N\) first derivative matrix (with \(M=N-1\) in the 1d case, and \(M=2\widetilde{N}\) in the 2d case).
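The identities above are easy to verify numerically. The sketch below (illustrative; the small safeguard eps for zero gradients is an assumption, as is the sample vector) builds \(D_\mathrm{1d}\), the weights (2.4), and the 2d operators via the Kronecker products defining \(D^\mathrm {h}\) and \(D^\mathrm {v}\).

```python
import numpy as np

def d1d(n):
    """1d first-difference matrix D_1d of (2.3), of size (n-1) x n."""
    D = np.zeros((n - 1, n))
    D[np.arange(n - 1), np.arange(n - 1)] = 1.0
    D[np.arange(n - 1), np.arange(1, n)] = -1.0
    return D

def tv_weights_1d(Dv, eps=1e-12):
    """Diagonal weights (2.4); eps guards against zero gradient entries."""
    return np.diag((np.abs(Dv) + eps) ** (-0.5))

n = 6
v = np.array([0.0, 1.0, 1.0, 3.0, 2.0, 2.5])
D = d1d(n)
Dv = D @ v
W = tv_weights_1d(Dv)
# ||W D v||_2^2 recovers ||D v||_1 = TV(v), up to the eps safeguard
print(np.linalg.norm(W @ Dv) ** 2, np.abs(Dv).sum())

# 2d operators via Kronecker products: D^h = D_1d x I, D^v = I x D_1d
I = np.eye(n)
Dh = np.kron(d1d(n), I)       # shape (n(n-1), n^2)
Dvert = np.kron(I, d1d(n))    # shape (n(n-1), n^2)
D2d = np.vstack([Dh, Dvert])  # shape (2*n(n-1), n^2), as in (2.5)
print(D2d.shape)
```

The shapes confirm \(M=N-1\) in the 1d case and \(M=2\widetilde{N}=2n(n-1)\) in the 2d case, matching the generic notation introduced above.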
3 TV-preconditioned flexible GMRES
4 Implementation strategies
To devise efficient implementations of the TV-FGMRES method applied to the system (3.5) or (3.6), a number of properties of the involved matrices should be taken into account. In this section we will often use MATLAB-like notation: for instance, we will use a dot to denote a component-wise operation, a colon to access elements in a range of rows or columns of an array, and \(\text {diag}(\cdot )\) to denote a vector of diagonal entries. We will extensively invoke and generalize some of the propositions derived in [19]. We start by proving the following result for system (3.5) (analogous to Theorem 5.1 in [19]).
Theorem 4.1
Proof
4.1 Computations of matrix-vector products with D and \((D^\dagger )^{T}\)
4.2 Computation of matrix-vector products with \(L^\dagger \) and \(L_A^\dagger \)
4.3 Stopping criteria
- Quasi-optimality criterion, which prescribes to select the solution \(x_{L,m^*}\) obtained at the \(m^*\)th iteration such that
$$\begin{aligned} m^*= \text {arg} \min _{m\le M_{\text {it}}} \text {TV}(x_{L,m+1}-x_{L,m})\,. \end{aligned}$$(4.15)
We remark that, although the quasi-optimality criterion requires \(M_{\text {it}}\) iterations to be performed in advance (where \(M_{\text {it}}\) is a selected maximum number of iterations), no additional computational cost per iteration has to be accounted for in order to apply (4.15) (recall the arguments at the beginning of this section).
- Discrepancy principle, which prescribes to stop as soon as an approximation \(x_{L,m}\) is computed such that
$$\begin{aligned} \Vert b-Ax_{L,m}\Vert _2 = \Vert r_{L,m}\Vert _2\le \theta \epsilon \,, \end{aligned}$$(4.16)
where \(\theta >1\) is a safety threshold, and \(\epsilon =\Vert e\Vert _2\) is the norm of the noise e affecting the data (1.1). The discrepancy principle is a very popular and well-established stopping criterion that relies on the availability of a good estimate of \(\Vert e\Vert _2\). However, for the TV-FGMRES method, application of the discrepancy principle may significantly increase the cost per iteration, since two additional matrix-vector products with A should be performed: one to compute \(\Vert r_{L,m}\Vert _2\) (which cannot be monitored in reduced dimension, as FGMRES is applied to the left-preconditioned system (3.5) or (3.6)), and one implicit in \(L_A^\dagger \) (to compute \(x_{L,m}\) at each iteration). For this reason, we also propose to consider the:
- Preconditioned discrepancy principle, which prescribes to stop as soon as an approximation \(x_{L,m}\) is computed such that
$$\begin{aligned} \Vert \widehat{b}-\widehat{A}\bar{x}_{L,m}\Vert _2 = \Vert \widehat{r}_{m-1}\Vert _2 \le \theta {\widehat{\epsilon }}\,, \end{aligned}$$(4.17)
where \({\widehat{\epsilon }}\) is the norm of the noise associated with the preconditioned problem, i.e.,
$$\begin{aligned} {\widehat{\epsilon }}= & {} \Vert \widehat{e}\Vert _2=\Vert (D^\dagger )^TP e \Vert _2= \text {trace}(P^TD^\dagger (D^\dagger )^TP) \Vert e\Vert _2\nonumber \\= & {} \text {trace}(P^TD^\dagger (D^\dagger )^TP)\epsilon \,. \end{aligned}$$(4.18)
Although (4.17) can be monitored at no additional cost per FGMRES iteration by using projected quantities (see (3.10)), the computation of the trace in (4.18) can be prohibitive for large-scale (and possibly matrix-free) problems. We mention, however, that efficient randomized techniques can be used to handle this task (see [28]) and, most importantly, the computation of the trace should be performed only once for a system (3.5) or (3.6) of a given size, and can be done offline.
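As an illustration of the randomized trace estimation mentioned for (4.18) (see [28]), a basic Hutchinson estimator needs only matrix-vector products with the matrix whose trace is sought. The matrix B below is a generic symmetric stand-in for \(P^TD^\dagger (D^\dagger )^TP\), an assumption made for the sake of a self-contained example.

```python
import numpy as np

def hutchinson_trace(matvec, n, n_samples=500, seed=None):
    """Randomized (Hutchinson) trace estimator: E[z^T B z] = trace(B)
    for Rademacher vectors z, using only matrix-vector products with B."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        est += z @ matvec(z)
    return est / n_samples

rng = np.random.default_rng(2)
n = 60
M = rng.standard_normal((n, n))
B = M @ M.T                     # symmetric positive semidefinite test matrix
t_est = hutchinson_trace(lambda z: B @ z, n, n_samples=500, seed=3)
print(t_est, np.trace(B))
```

Since only matvecs are needed, the estimator is compatible with matrix-free representations of \(D^\dagger \), and the (approximate) trace can be computed offline once per problem size, as remarked above.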
5 Numerical experiments
Summary of the acronyms denoting various solvers for TV regularization, and markers denoting the various stopping criteria
| Solver | Acronym | Reference | Stopping criteria | Marker |
|---|---|---|---|---|
| Smoothing-norm GMRES with L | GMRES(L) | [19] | (4.15), inner | Diamond |
| Restarted generalized AT | ReSt-GAT | [15] | | |
| Restarted Golub–Kahan bidiag. | ReSt-GKB | [16] | (4.16), inner | Square |
| Fast gradient-based TV | FBTV | [3] | | |
| TV-FGMRES for \(\text {TV}\) with \(L^\dagger \) | FGMRES(1) | – | (4.17), inner | Hexagon |
| TV-FGMRES for \(\text {TV}_p\) with \(L^\dagger \) | FGMRES(p) | – | | |
| TV-FGMRES for \(\text {TV}_p\) with \(\widetilde{L}^\dagger \) | FGMRES(\(\sim p\)) | – | | |
Example 1
Example 2
We consider the task of restoring the well-known Shepp–Logan phantom of size \(256\times 256\) pixels, affected by a Gaussian blur whose PSF is given by (5.1), with \(\sigma =4\) and \(i,j=-127,\ldots ,127\), and corrupted by Gaussian noise with relative level \(\varepsilon _{\text {rel}}=5\times 10^{-2}\) (see Fig. 8a). In Fig. 7 we plot the values of the relative error (frame (a)) and the total variation (frame (b)) versus the number of iterations for a variety of solvers for (1.3): the layout of this figure is similar to the one of Fig. 4, and 90 iterations are performed for each solver. We can clearly see that, for this test problem, TV-FGMRES is the most effective solver, as it attains the best accuracy within the smallest number of iterations. The fast gradient-based method for TV (with a default value \(\lambda =5.4\times 10^{-4}\)) seems quite slow for this problem, and the restarted GKB algorithm (which is basically the restarted GAT method, where Golub–Kahan bidiagonalization is considered instead of the Arnoldi algorithm) rapidly stagnates (with an automatically selected \(\lambda \) stabilizing around \(1.7\times 10^{-2}\)).
Figure 8 displays the phantoms restored when the discrepancy principle (4.16) is satisfied by the GMRES(D), the TV-FGMRES, and the FBTV methods (the latter does not stop within the maximum number of allowed iterations). Relative errors and corresponding iteration numbers are reported in the caption. We can clearly see that the TV-FGMRES solution is the one with the lowest relative reconstruction error, though the FBTV solution surely appears more blocky (containing also some artifacts). By contrast, the GMRES(D) solution displays many ringing artifacts, which are partially removed when adaptive weights are incorporated within the TV-FGMRES preconditioners and approximation subspace.
Example 3
We consider the task of restoring the cameraman test image of size \(256\times 256\) pixels, corrupted by the same blur and noise used in the previous example (see Fig. 10a). However, in contrast to the previous example, the total variation of the exact image is quite moderate. In Fig. 9 we plot the values of the relative error (frame (a)) and the total variation (frame (b)) versus the number of iterations for a variety of solvers for (1.3): the layout of this figure is similar to the one of Figs. 4 and 7. Also for this example, 90 iterations are performed for each solver. The best reconstructions computed by the GMRES(D), the TV-FGMRES, and the FBTV methods are displayed in Fig. 10 (relative restoration errors are reported in the caption). For this test problem all the solvers seem to have a similar performance in terms of relative errors (except for ReSt-GAT, which exhibits an unstable behavior because of a likely inappropriate choice of the regularization parameter). We also remark that both ReSt-GKB and FBTV are very fast in recovering an approximate solution, whose quality however stagnates. TV-FGMRES seems to recover a more accurate value of the total variation of the approximate solutions along the iterations. Correspondingly, more details are visible in the image restored by TV-FGMRES than in the one restored by the FBTV method, which is more blocky (consistent with the fact that FBTV underestimates the total variation of the exact solution).
6 Conclusion and future work
In this paper we presented a novel GMRES-based approach for computing regularized solutions for large-scale linear inverse problems involving TV penalization, with applications to image deblurring problems. By considering an IRN approach to approximate the non-differentiable total variation term, and by exploiting the framework of smoothing-norm preconditioning for GMRES, we could derive the TV-FGMRES method that leverages the flexible Arnoldi algorithm. The TV-FGMRES method easily extends to problems involving \(\text {TV}_p\) regularization, and it is inherently parameter-free and efficient, as various numerical experiments and comparisons with other solvers for total variation regularization show.
Future work includes a more careful investigation of how to optimally derive alternative preconditioners that can speed up the convergence of LSQR for the computation of the pseudo-inverse \(L^\dagger \) for large-scale problems. Strategies to extend the TV-FGMRES method to incorporate additional penalization terms can be studied as well. Finally, ways of extending TV-FGMRES to handle non-square coefficient matrices can be devised by exploiting the flexible Golub–Kahan bidiagonalization algorithm derived in [10].
Acknowledgements
We are grateful to the anonymous Referees for providing insightful suggestions that helped to improve the paper. We would also like to thank James Nagy for insightful discussions about structured matrix computations.
References
- 1. Arridge, S.R., Betcke, M.M., Harhanen, L.: Iterated preconditioned LSQR method for inverse problems on unstructured grids. Inverse Probl. 30(7), 075009 (2014)
- 2. Bauer, F., Gutting, M., Lukas, M.A.: Evaluation of Parameter Choice Methods for Regularization of Ill-Posed Problems in Geomathematics, pp. 1713–1774. Springer, Berlin (2015)
- 3. Beck, A., Teboulle, M.: Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 18(11), 2419–2434 (2009)
- 4. Berisha, S., Nagy, J.G.: Iterative image restoration. In: Chellappa, R., Theodoridis, S. (eds.) Academic Press Library in Signal Processing, chap. 7, vol. 4, pp. 193–243. Elsevier, Amsterdam (2014)
- 5. Calvetti, D.: Preconditioned iterative methods for linear discrete ill-posed problems from a Bayesian inversion perspective. J. Comput. Appl. Math. 198(2), 378–395 (2007)
- 6. Calvetti, D., Lewis, B., Reichel, L.: On the regularizing properties of the GMRES method. Numer. Math. 91(4), 605–625 (2002)
- 7. Candès, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted \(\ell _1\) minimization. J. Fourier Anal. Appl. 14, 877–905 (2008)
- 8. Chan, T.F., Golub, G.H., Mulet, P.: A nonlinear primal-dual method for total variation-based image restoration. SIAM J. Sci. Comput. 20(6), 1964–1977 (1999)
- 9. Chan, T.F., Shen, J.: Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods. SIAM, Philadelphia (2005)
- 10. Chung, J., Gazzola, S.: Flexible Krylov methods for \(\ell ^p\) regularization (2018) (submitted)
- 11. Chung, J., Knepper, S., Nagy, J.G.: Large-scale inverse problems in imaging. In: Scherzer, O. (ed.) Handbook of Mathematical Methods in Imaging, chap. 2, pp. 43–86. Springer, Berlin (2011)
- 12. Eldén, L.: A weighted pseudoinverse, generalized singular values, and constrained least-squares problems. BIT Numer. Math. 22(4), 487–502 (1982)
- 13. Fong, D.C.L., Saunders, M.A.: LSMR: an iterative algorithm for sparse least-squares problems. SIAM J. Sci. Comput. 33(5), 2950–2971 (2011)
- 14. Gazzola, S., Hansen, P.C., Nagy, J.G.: IR Tools: a MATLAB package of iterative regularization methods and large-scale test problems. Numer. Algorithms (2018). https://doi.org/10.1007/s11075-018-0570-7
- 15. Gazzola, S., Nagy, J.G.: Generalized Arnoldi–Tikhonov method for sparse reconstruction. SIAM J. Sci. Comput. 36(2), B225–B247 (2014)
- 16. Gazzola, S., Novati, P., Russo, M.R.: On Krylov projection methods and Tikhonov regularization. Electron. Trans. Numer. Anal. 44(1), 83–123 (2015)
- 17. Golub, G.H., Van Loan, C.F.: Matrix Computations, 3rd edn. Johns Hopkins, Baltimore (1996)
- 18. Hansen, P.C.: Discrete Inverse Problems: Insight and Algorithms. Society for Industrial and Applied Mathematics, Philadelphia (2010)
- 19. Hansen, P.C., Jensen, T.K.: Smoothing-norm preconditioning for regularizing minimum-residual methods. SIAM J. Matrix Anal. Appl. 29(1), 1–14 (2007)
- 20. Hansen, P.C., Nagy, J.G., O’Leary, D.P.: Deblurring Images: Matrices, Spectra, and Filtering. Society for Industrial and Applied Mathematics, Philadelphia (2006)
- 21. Jensen, T.K., Hansen, P.C.: Iterative regularization with minimum-residual methods. BIT Numer. Math. 47, 103–120 (2007)
- 22. Kubínová, M., Nagy, J.G.: Robust regression for mixed Poisson–Gaussian model. Numer. Algorithms 79(3), 825–851 (2018)
- 23. Lanza, A., Morigi, S., Reichel, L., Sgallari, F.: A generalized Krylov subspace method for \(\ell _p-\ell _q\) minimization. SIAM J. Sci. Comput. 37, S30–S50 (2015)
- 24. Notay, Y.: Flexible conjugate gradients. SIAM J. Sci. Comput. 22, 1444–1460 (2000)
- 25. Osher, S., Burger, M., Goldfarb, D., Xu, J., Yin, W.: An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 4(2), 460–489 (2005)
- 26. Paige, C.C., Saunders, M.A.: LSQR: an algorithm for sparse linear equations and sparse least squares. ACM Trans. Math. Softw. 8(1), 43–71 (1982)
- 27. Saad, Y.: Iterative Methods for Sparse Linear Systems, 2nd edn. Society for Industrial and Applied Mathematics, Philadelphia (2003)
- 28. Saibaba, A.K., Alexanderian, A., Ipsen, I.C.F.: Randomized matrix-free trace and log-determinant estimators. Numer. Math. 137(5), 353–395 (2017)
- 29. Vogel, C.R., Oman, M.E.: Fast, robust total variation-based reconstruction of noisy, blurred images. IEEE Trans. Image Process. 7(6), 813–824 (1998)
- 30. Wohlberg, B., Rodríguez, P.: An iteratively reweighted norm algorithm for minimization of total variation functionals. IEEE Signal Process. Lett. 14, 948–951 (2007)
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.