Multidirectional Subspace Expansion for One-Parameter and Multiparameter Tikhonov Regularization

Tikhonov regularization is a popular method to approximate solutions of linear discrete ill-posed problems when the observed or measured data is contaminated by noise. Multiparameter Tikhonov regularization may improve the quality of the computed approximate solutions. We propose a new iterative method for large-scale multiparameter Tikhonov regularization with general regularization operators based on a multidirectional subspace expansion. The multidirectional subspace expansion may be combined with subspace truncation to avoid excessive growth of the search space. Furthermore, we introduce a simple and effective parameter selection strategy based on the discrepancy principle and related to perturbation results.


Introduction
We consider one-parameter and multiparameter Tikhonov regularization problems of the form

x(µ) = argmin_x { ‖Ax − b‖² + Σ_{i=1}^ℓ μ^i ‖L^i x‖² },    (1)

where ‖·‖ denotes the 2-norm and the superscript i is used as an index. We focus on large-scale discrete ill-posed problems such as the discretization of Fredholm integral equations of the first kind. More precisely, assume A is an ill-conditioned or even singular m × n matrix with m ≥ n, the L^i are p^i × n matrices such that the nullspaces of A and the L^i intersect trivially, and the μ^i are nonnegative regularization parameters. Furthermore, assume b is contaminated by an error e and satisfies b = Ax + e, where x denotes the exact solution. Finally, we assume that a bound ‖e‖ ≤ ε is available, so that the discrepancy principle can be used.
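As a small illustration of the multiparameter problem (1), the following sketch solves a dense two-operator problem via the stacked least-squares formulation. The test matrix, operators, and parameters (A, L1, L2, mu1, mu2) are our own illustrative choices, not the paper's test problems.

```python
import numpy as np

# Minimal dense sketch of (1) with two regularization operators; all names and
# the test problem are illustrative, not taken from the paper's experiments.
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(10.0 ** np.linspace(0, -8, n)) @ V.T   # severely ill-conditioned
x_true = np.sin(np.linspace(0, np.pi, n))
e = 0.01 * rng.standard_normal(n)
b = A @ x_true + e

L1 = np.eye(n)                          # identity operator
L2 = np.diff(np.eye(n), axis=0)         # first-derivative stencil [1, -1]
mu1, mu2 = 1e-3, 1e-3

# argmin_x ||A x - b||^2 + mu1 ||L1 x||^2 + mu2 ||L2 x||^2 as stacked least squares
M = np.vstack([A, np.sqrt(mu1) * L1, np.sqrt(mu2) * L2])
rhs = np.concatenate([b, np.zeros(M.shape[0] - n)])
x_reg = np.linalg.lstsq(M, rhs, rcond=None)[0]

err_naive = np.linalg.norm(np.linalg.solve(A, b) - x_true) / np.linalg.norm(x_true)
err_reg = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
```

The naive solve amplifies the noise through the small singular values, while the regularized solution remains bounded; this is the basic effect that all methods in the paper exploit.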
In one-parameter Tikhonov regularization (ℓ = 1), the choice of the regularization operator is typically significant, since frequencies in the nullspace of the operator remain unpenalized. Multiparameter Tikhonov regularization can be used when a satisfactory choice of the regularization operator is unknown in advance, or can be seen as an attempt to combine the strengths of different regularization operators. In some applications, using more than one regularization operator and parameter allows for more accurate solutions [1,2,17,20].
Solving (1) for large-scale problems may be challenging. In case the μ^i are fixed a priori, methods such as LSQR [21] or LSMR [4] may be used. However, the problem becomes more complicated when the regularization parameters are not fixed in advance [12,15,17]. In this paper, we present a new subspace method consisting of three phases: a new expansion phase, a new extraction phase, and a new truncation phase. To be more specific, let X_k ⊂ R^n be a subspace of dimension k ≪ n, and let the columns of the matrix X_k form an orthonormal basis for X_k. Then we can compute matrix decompositions

A X_k = U_{k+1} H_k,    L^i X_k = V^i_k K^i_k,    (2)

where U_{k+1} and the V^i_k have orthonormal columns, βu_1 = b with β = ‖b‖, H_k is a (k + 1) × k Hessenberg matrix, and each K^i_k is upper triangular. Denote µ = (μ^1, …, μ^ℓ) for convenience. Now restrict the solution space to X_k, so that x_k(µ) = X_k c_k(µ), where

c_k(µ) = argmin_c { ‖H_k c − βe_1‖² + Σ_{i=1}^ℓ μ^i ‖K^i_k c‖² }.    (3)

The vector e_1 is the first standard basis vector of appropriate dimension. Our paper has three contributions. First, a new expansion phase in which we add multiple search directions to X_k. Second, a new truncation phase which removes unwanted new search directions. Third, a new method for selecting the regularization parameters μ^i_k in the extraction phase. The three phases work alongside each other: the intermediate solution obtained in the extraction phase is preserved in the truncation phase, whereas the remaining perpendicular component(s) from the expansion phase are removed.
The paper is organized as follows. In Sect. 2 an existing nonlinear subspace method is discussed, whereafter we propose the new multidirectional subspace expansion of the expansion phase. Discussion of the truncation phase follows immediately. Section 3 focuses on discrepancy-principle-based parameter selection for one-parameter regularization. New lower and upper bounds on the regularization parameter are provided. Sections 4 and 5 describe the extraction phase: in the former, a straightforward parameter selection strategy for multiparameter regularization is given; in the latter, a justification using perturbation analysis. Numerical experiments are performed in Sect. 6 and demonstrate the competitiveness of our new method. We end with concluding remarks in Sect. 7.

Subspace Expansion for Multiparameter Tikhonov
Let us first consider one-parameter Tikhonov regularization with a general regularization operator. Then ℓ = 1 and we write μ = μ^1, L = L^1, K_k = K^1_k, and V_k = V^1_k. When L = I we use the Golub-Kahan-Lanczos bidiagonalization procedure to generate the Krylov subspace

K_k(A*A, A*b) = span{A*b, (A*A)A*b, …, (A*A)^{k−1} A*b}.

In this case H_k is lower bidiagonal and K_k is the identity. If L ≠ I one can still try to use the above Krylov subspace [12]; however, it may be more natural to consider a shift-independent generalized Krylov subspace of the form K_k(A*A, L*L, A*b). This generalized Krylov subspace was first studied by Li and Ye [18] and later by Reichel et al. [23]. An orthonormal basis can be created with a generalization of Golub-Kahan-Lanczos bidiagonalization [13]. However, while the search space grows linearly as a function of the number of matrix-vector products, the dimension of the generalized Krylov subspace grows exponentially as a function of the total degree of a bivariate matrix polynomial. As a result, if we take any vector x ∈ K_k(A*A, L*L, A*b) and write it as p(A*A, L*L)A*b, where p is a bivariate polynomial, then the degree of p is only O(log₂ k). This low degree may be undesirable, especially for small regularization parameters μ. Reichel and Yu [24,25] solve this in part with algorithms that can prioritize one operator over the other. For instance, if w is a vector in a group j and B has priority over A, then group j + 1 contains (A*A)w, (B*B)w, (B*B)²w, …, (B*B)^ρ w. The downside is that ρ is a user-defined constant, and that the expansion vectors are not necessarily optimal.
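The standard-form building block mentioned above, Golub-Kahan-Lanczos bidiagonalization, can be sketched as follows; variable names are ours, and full reorthogonalization is used as in the paper's implementation.

```python
import numpy as np

# Sketch of Golub-Kahan-Lanczos bidiagonalization for the standard-form case
# L = I: it builds A V_k = U_{k+1} B_k with beta * u_1 = b and B_k lower
# bidiagonal. Variable names are ours.
def golub_kahan(A, b, k):
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))               # lower bidiagonal projected matrix
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    for j in range(k):
        v = A.T @ U[:, j]
        v -= V[:, :j] @ (V[:, :j].T @ v)   # (re)orthogonalize against V
        alpha = np.linalg.norm(v)
        V[:, j] = v / alpha
        B[j, j] = alpha
        u = A @ V[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)  # (re)orthogonalize against U
        gamma = np.linalg.norm(u)
        U[:, j + 1] = u / gamma
        B[j + 1, j] = gamma
    return U, B, V, beta

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 20))
b = rng.standard_normal(30)
k = 5
U, B, V, beta = golub_kahan(A, b, k)
decomp_err = np.linalg.norm(A @ V - U @ B)
orth_err = max(np.linalg.norm(U.T @ U - np.eye(k + 1)),
               np.linalg.norm(V.T @ V - np.eye(k)))
```

With k matrix-vector products per operator, the search space dimension grows linearly, which is the behavior the generalized Krylov subspace fails to match in terms of polynomial degree.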
An alternative approach is the greedy nonlinear method described by Lampe et al. [17]. We briefly review their method and state a straightforward extension to multiparameter Tikhonov regularization. First note that the low-dimensional minimization in (3) simplifies to

c_k(μ) = argmin_c { ‖H_k c − βe_1‖² + μ ‖K_k c‖² }

in the one-parameter case. Next, compute a value μ = μ_k using, e.g., the discrepancy principle. It is easy to verify that the residual of the normal equations,

r_k = A*(b − A x_k(μ_k)) − μ_k L*L x_k(μ_k),

is perpendicular to X_k and proportional to the gradient of the cost function at the point x_k(μ_k). Therefore, this vector is used to expand the search space. As usual, expansion and extraction are repeated until suitable stopping criteria are met. As previously stated, Lampe et al. [17] consider only one-parameter Tikhonov regularization; however, their method readily extends to multiparameter Tikhonov regularization. Again, the first step is to decide on regularization parameters µ_k. Next, use the residual of the normal equations,

r_k = A*(b − A x_k(µ_k)) − Σ_{i=1}^ℓ μ^i_k L^i*L^i x_k(µ_k),

to expand the search space. Note that this residual is again orthogonal to X_k and proportional to the gradient of the cost function. We summarize this multiparameter method in Algorithm 1, but remark that in practice we initially use Golub-Kahan-Lanczos bidiagonalization until a µ_k can be found that satisfies the discrepancy principle.
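The orthogonality of this expansion vector to the current search space is easy to check numerically; the following small dense sketch uses random stand-ins for all quantities.

```python
import numpy as np

# Small dense check that the expansion vector of Lampe et al. -- the residual
# of the normal equations at the projected solution -- is orthogonal to the
# current search space X_k. All matrices here are random stand-ins.
rng = np.random.default_rng(1)
n, k = 40, 5
A = rng.standard_normal((n, n))
L = np.diff(np.eye(n), axis=0)          # first-derivative operator
b = rng.standard_normal(n)
mu = 0.1

X, _ = np.linalg.qr(rng.standard_normal((n, k)))    # orthonormal basis of X_k
AX, LX = A @ X, L @ X
# Projected problem: min_c ||A X c - b||^2 + mu ||L X c||^2
c = np.linalg.solve(AX.T @ AX + mu * (LX.T @ LX), AX.T @ b)
x = X @ c

r = A.T @ (b - A @ x) - mu * (L.T @ (L @ x))        # normal-equations residual
orth = np.linalg.norm(X.T @ r) / np.linalg.norm(r)
```

The relative size of the component of r inside X_k is at the level of machine precision, confirming that r is a genuinely new direction.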
Suitable regularization operators often depend on the problem and its solution. Multiparameter regularization may be used when a priori information is lacking. In this case, it is not obvious that the residual vector above is a "good" expansion vector, in particular if the intermediate regularization parameters µ_k are not necessarily accurate. Hence, we propose to remove the dependence on the parameters to some extent by expanding the search space with the vectors

A*A x_k(µ_k),  L^1*L^1 x_k(µ_k),  …,  L^ℓ*L^ℓ x_k(µ_k)    (4)

separately. Here, we omit A*b as it is already contained in X_k. Since we expand the search space in multiple directions, we refer to this expansion as a "multidirectional" subspace expansion. Observe that the previous residual expansion vector is in the span of the multidirectional expansion vectors.
It is unappealing for the search space to grow with ℓ + 1 basis vectors per iteration, because the cost of orthogonalization and the cost of solving the projected problems depend on the dimension of the search space. Therefore, we wish to condense the best portions of the multiple directions into a single vector, and use the following approach. First we expand X_k with the vectors in (4) and obtain X_{k+ℓ+1}. Then we compute the decompositions

A X_{k+ℓ+1} = U_{k+ℓ+2} H_{k+ℓ+1},    L^i X_{k+ℓ+1} = V^i_{k+ℓ+1} K^i_{k+ℓ+1}    (5)

analogously to (2) and determine parameters µ_{k+1} and the approximate solution c_{k+ℓ+1}. Next, we compute rotated decompositions with orthonormal matrices Z, P, and Q^i of the form

Z = diag(I_k, Z_{ℓ+1}),  P = diag(I_{k+1}, P_{ℓ+1}),  Q^i = diag(I_k, Q^i_{ℓ+1}).    (6)

Here I_k is the k × k identity matrix and Z_{ℓ+1} is an orthonormal matrix such that Z_{ℓ+1} c_{k+1:k+ℓ+1} = γ e_1 for some scalar γ. The matrices P_{ℓ+1} and Q^i_{ℓ+1} are computed to make H_{k+ℓ+1} Z* and K^i_{k+ℓ+1} Z* upper-Hessenberg and upper-triangular again, respectively. At this point we can truncate (5) to decompositions of order k + 1, and truncate Z c_{k+ℓ+1} to obtain c_{k+1} so that X_{k+ℓ+1} c_{k+ℓ+1} = X_{k+1} c_{k+1}. The truncation is expected to keep the important components, since the directions removed from X_{k+ℓ+1} are perpendicular to the current best approximation x_{k+1}, and also to the previous best approximation. If the rotation and truncation are combined in one step, then the associated computational cost grows only with ℓ + 1, so that it quickly becomes smaller than the (re)orthogonalization cost as k grows.
To illustrate our approach, let us consider a one-parameter Tikhonov example where ℓ = 1. First we expand X_1 = [x_1] with the vectors A*A x_1 and L*L x_1, and use H_{1+2} and K_{1+2} to compute c_{1+2}. We then compute a rotation matrix Z_2 such that Z_2 c_{2:3} = ±‖c_{2:3}‖ e_1, and let Z be defined as in (6). The matrices H_{1+2} Z* and K_{1+2} Z* no longer have their original structure; hence, we need to compute orthonormal P and Q such that P H_{1+2} Z* is again upper-Hessenberg and Q K_{1+2} Z* is upper-triangular. At this point we truncate the subspaces by removing the last columns from X_{1+2} Z*, U_{2+2} P*, P H_{1+2} Z*, V_{1+2} Q*, and Q K_{1+2} Z*, and the bottom rows of P H_{1+2} Z* and Q K_{1+2} Z*. Below we summarize the steps of the new algorithm for solving problem (1). In our implementation we take care to use full reorthogonalization and to avoid extending X_k, U_{k+1}, and V^i_k with numerically linearly dependent vectors. We omit these steps from the pseudocode for brevity. In addition, we initially expand the search space solely with A*u_{k+1} until the discrepancy principle can be satisfied, in accordance with Proposition 1 in Sect. 3.
6. Compute P, Q^i, and the rotated decompositions.
7. Truncate the decompositions to obtain X_{k+1}, U_{k+2}, H_{k+1}, V^i_{k+1}, and K^i_{k+1}.
8. Truncate Z c_{k+ℓ+1} to obtain c_{k+1} and set x_{k+1} = X_{k+1} c_{k+1}.
9. if ‖x_{k+1} − x_k‖ / ‖x_k‖ is sufficiently small then break

We have completed our discussion of the expansion and truncation phases of our algorithm. In the following section we discuss the extraction phase for one-parameter Tikhonov regularization, and we treat the multiparameter case in later sections.
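The rotation Z_{ℓ+1} used in the truncation phase can be realized as a Householder reflector; the following small sketch (with our own names) maps the trailing coefficients onto a multiple of e_1 so that the remaining new directions can be discarded.

```python
import numpy as np

# Sketch of the rotation in the truncation phase: a Householder reflector
# Z with Z @ c_tail = gamma * e_1 condenses the coefficients of the new
# directions into a single vector. Names are ours.
def trailing_rotation(c_tail):
    gamma = np.linalg.norm(c_tail)
    e1 = np.zeros_like(c_tail)
    e1[0] = 1.0
    w = c_tail - gamma * e1
    nw = np.linalg.norm(w)
    if nw < 1e-14:                      # c_tail is already a multiple of e_1
        return np.eye(len(c_tail)), gamma
    w /= nw
    return np.eye(len(c_tail)) - 2.0 * np.outer(w, w), gamma

c_tail = np.array([0.3, -1.2, 0.5])     # coefficients of the new directions
Z, gamma = trailing_rotation(c_tail)
mapped = Z @ c_tail                     # equals gamma * e_1
```

After applying Z (embedded as in (6)), only the first of the new basis vectors carries the approximate solution; the others are perpendicular to it and can be truncated.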

Parameter Selection in Standard Tikhonov
In this section we investigate parameter selection for general-form one-parameter Tikhonov regularization, where ℓ = 1, μ = μ^1, and L = L^1. Multiple methods exist in the one-parameter case to determine a particular μ_k, including the discrepancy principle, the L-curve criterion, and generalized cross validation; see, for example, Hansen [11, Ch. 7]. We focus on the discrepancy principle, which states that μ_k must satisfy

‖A x_k(μ_k) − b‖ = ηε,    (7)

where ‖e‖ ≤ ε and η > 1 is a user-supplied constant independent of ε.
It is known that root-finding methods can find solutions; for example, Lampe et al. [17] compare four of them. We prefer bisection for its reliability and its straightforward analysis and implementation. The performance difference is not an issue, because root finding requires only a fraction of the total computation time and is not a bottleneck. A unique solution μ_k exists under mild conditions; see for instance [3]. Below we give a proof using our own notation.
Assume H_k and K_k are of full rank and let P_k Σ_k Q_k* be the singular value decomposition of H_k K_k^{-1}. Let the singular values be denoted by

σ_max = σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_k = σ_min ≥ 0.    (8)

Now we can express c_k(μ) and ϕ(μ) = ‖H_k c_k(μ) − βe_1‖² as

c_k(μ) = β K_k^{-1} Q_k (Σ_k*Σ_k + μI)^{-1} Σ_k* P_k* e_1

and

ϕ(μ) = β² ‖μ (Σ_k Σ_k* + μI)^{-1} P_k* e_1‖² + β² ‖(I − P_k P_k*) e_1‖².    (9)

Observe that P_k is a basis for the range of H_k K_k^{-1}, and I − P_k P_k* is the orthogonal projection onto the nullspace N(H_k*), sometimes denoted by P_{N(H_k*)}. Furthermore, the following can be verified.

Proposition 1  There exists a unique μ_k ≥ 0 satisfying the discrepancy principle (7) if and only if β ‖(I − P_k P_k*) e_1‖ ≤ ηε < β.

Proof  (See also [3] and references therein.) From (9) it follows that ϕ is a rational function with poles μ = −σ_j² for all σ_j > 0; therefore, ϕ is C^∞ on the interval [0, ∞). Additionally, ϕ is a strictly increasing and bounded function on the same interval, with ϕ(0) = β² ‖(I − P_k P_k*) e_1‖² and ϕ(μ) ↑ β² as μ → ∞. Consequently, there exists a unique μ_k ≥ 0 with ϕ(μ_k) = η²ε² precisely when ϕ(0) ≤ η²ε² < β².

Beyond nonnegativity, the proposition above provides little insight into the location of μ_k on the real axis, and we would like to have lower and upper bounds. We determine bounds in Proposition 2 and believe the results to be new. Both in practice and for the proof of the subsequent proposition, it is useful to remove the nonessential parts of ϕ(μ) and instead work with the function

ϕ̃(μ) = ‖μ (Σ_k Σ_k* + μI)^{-1} P_k* e_1‖

and the quantity

ε̃ = sqrt( η²ε²/β² − ‖(I − P_k P_k*) e_1‖² ).

Proposition 2  Let ρ = ‖P_k* e_1‖ ≤ 1. Then 0 ≤ ϕ̃(μ) ≤ ρ, the conditions of Proposition 1 are satisfied if and only if 0 ≤ ε̃ < ρ, and

σ²_min ε̃ / (ρ − ε̃) ≤ μ_k ≤ σ²_max ε̃ / (ρ − ε̃),

where σ_min and σ_max are as in (8).

Proof  The key of the proof is to observe that

μρ / (σ²_max + μ) ≤ ϕ̃(μ) ≤ μρ / (σ²_min + μ).

Combining the definitions of ϕ̃ and ε̃ with the discrepancy condition ϕ(μ_k) = η²ε² yields ϕ̃(μ_k) = ε̃. Hence, if ε̃ = 0, then μ_k = 0 and we are done. Otherwise μ_k ≠ 0, and we can divide by ρ, take the reciprocals, and subtract 1 to arrive at

σ²_min / μ_k ≤ ρ/ε̃ − 1 ≤ σ²_max / μ_k.

So that

σ²_min ε̃ / (ρ − ε̃) ≤ μ_k ≤ σ²_max ε̃ / (ρ − ε̃),

and the proposition follows.
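As an illustration of discrepancy-principle root finding with bisection, the sketch below works on a small dense standard-form problem rather than the projected one; the test matrix and all names are ours. Since ϕ is increasing, bisection on ϕ(μ) − (ηε)² is reliable.

```python
import numpy as np

# Bisection for the discrepancy principle on a small dense standard-form
# Tikhonov problem (our own test matrix; the paper applies this to the
# projected problem instead).
rng = np.random.default_rng(2)
n = 60
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(10.0 ** np.linspace(0, -6, n)) @ V.T   # ill-conditioned
x_true = np.ones(n)
e = rng.standard_normal(n)
e *= 0.01 * np.linalg.norm(A @ x_true) / np.linalg.norm(e)
b = A @ x_true + e
eps, eta = np.linalg.norm(e), 1.01

def phi(mu):
    x = np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)
    return np.linalg.norm(A @ x - b) ** 2

lo, hi = 0.0, 1.0
while phi(hi) < (eta * eps) ** 2:       # grow hi until the root is bracketed
    hi *= 10.0
for _ in range(100):                    # plain bisection
    mid = 0.5 * (lo + hi)
    if phi(mid) < (eta * eps) ** 2:
        lo = mid
    else:
        hi = mid
mu_k = 0.5 * (lo + hi)
disc_gap = abs(np.sqrt(phi(mu_k)) - eta * eps)
```

In practice the lower and upper bounds of Proposition 2 can replace the ad hoc bracketing loop, which is exactly how the paper proposes to initialize bisection or the secant method.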
It is undesirable to work with the inverse of K_k when it becomes ill-conditioned. Instead it may be preferable to use the generalized singular value decomposition (GSVD)

H_k = P_k C_k Z_k^{-1},    K_k = Q_k S_k Z_k^{-1},

where P_k and Q_k have orthonormal columns and Z_k is nonsingular. The matrices C_k and S_k are diagonal with nonnegative entries satisfying C_k*C_k + S_k*S_k = I. The generalized singular values are given by c_i/s_i and are understood to be infinite when s_i = 0. If K_k is nonsingular, then the generalized singular values coincide with the singular values of H_k K_k^{-1}. See Golub and Van Loan [8, Section 8.7.3] for more information.
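The stated link between the GSVD and the singular values of H_k K_k^{-1} can be checked numerically; the sketch below avoids an explicit GSVD routine by computing the squared generalized singular values as generalized eigenvalues of the pencil (H*H, K*K), with our own small test matrices.

```python
import numpy as np

# When K is nonsingular, the generalized singular values of (H, K) coincide
# with the singular values of H K^{-1}; equivalently, their squares are the
# generalized eigenvalues of the pencil (H^T H, K^T K). Small check with our
# own test matrices.
rng = np.random.default_rng(5)
k = 6
H = rng.standard_normal((k + 1, k))
K = np.triu(rng.standard_normal((k, k))) + 2.0 * np.eye(k)   # nonsingular

gsv_direct = np.sort(np.linalg.svd(H @ np.linalg.inv(K), compute_uv=False))
gev = np.linalg.eigvals(np.linalg.solve(K.T @ K, H.T @ H))
gsv_pencil = np.sort(np.sqrt(np.abs(gev.real)))
gsv_gap = float(np.max(np.abs(gsv_direct - gsv_pencil)))
```

For ill-conditioned K one would of course use a proper GSVD implementation rather than forming K^{-1} or the pencil explicitly; the point here is only the equivalence of the quantities involved.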
Using a similar derivation as before, we can show that the generalized singular values γ_i = c_i/s_i play the role of the singular values in Proposition 2, and that the new bounds are given by

γ²_min ε̃ / (ρ − ε̃) ≤ μ_k ≤ γ²_max ε̃ / (ρ − ε̃).

Here μ_k is unbounded from above if γ_max is infinite, that is, if s_i = 0 for some i. The bounds in this section can be readily computed and used to implement bisection or the secant method. We consider parameter selection for multiparameter regularization in the following section.

A Multiparameter Selection Strategy
Choosing satisfactory μ^i_k in multiparameter regularization is more difficult than in the corresponding one-parameter problem; see for example [1,2,6,14,16,20]. In particular, there is no obvious multiparameter extension of the discrepancy principle. Nevertheless, methods based on the discrepancy principle exist and we will discuss three of them.
Brezinski et al. [2] had some success with operator splitting. The idea is to split (3) into the ℓ one-parameter problems

c^i(ν^i) = argmin_c { ‖H_k c − βe_1‖² + ν^i ‖K^i_k c‖² },    (12)

where each ν^i is determined by the discrepancy principle. This form of the minimization problem suggests approximating the solution by a linear combination of the c^i(ν^i) [2, Sect. 3]. Alternatively, Brezinski et al. [2] consider solving a related minimization problem in which the ν^i are fixed and obtained from (12). The latter approach provides better results in exchange for an additional QR decomposition. In either case, operator splitting is a straightforward approach, but does not necessarily satisfy the discrepancy principle exactly. Lu and Pereverzyev [19] and later Fornasier et al. [5] rewrite the constrained minimization problem as a differential equation and approximate the resulting value function by a model function m(µ) which admits a straightforward solution to the constructed differential equation. However, it is unclear which µ the method finds, and its solution may depend on the initial guess. On the other hand, it is possible to keep all but one parameter fixed and compute a value for the free parameter such that the discrepancy principle is satisfied. This allows one to trace discrepancy hypersurfaces to some extent. Gazzola and Novati [6] describe another interesting method. They start with a one-parameter problem and successively add parameters in a novel way, until each parameter of the full multiparameter problem has a value assigned. Especially in early iterations the discrepancy principle is not satisfied, but the parameters are updated in each iteration so that the norm of the residual is expected to approach ηε. Unfortunately, we observed some issues in our implementation. For example, the quality of the result depends on initial values, as well as on the order in which the operators are added (that is, the indexing of the operators). The latter problem is solved by a recently published and improved version of the method [7], which was brought to our attention during the revision of this paper.
We propose a new method that satisfies the discrepancy principle exactly, does not depend on an initial guess, and is independent of the scaling or indexing of the operators. The method uses the operator-splitting approach in combination with new weights. Let us omit all k subscripts for the remainder of this section, and suppose μ^i = μω^i, where the ω^i are nonnegative, but do not necessarily sum to one, and μ is such that the discrepancy principle is satisfied. Then (3) can be written as

c(µ) = argmin_c { ‖H c − βe_1‖² + μ Σ_{i=1}^ℓ ω^i ‖K^i c‖² }.    (13)

Since the goal of regularization is to reduce the sensitivity of the solution to noise, we use weights proportional to ‖Dc^i(ν^i)‖^{-1},    (14)

which bias the regularization parameters in the direction of lower sensitivity with respect to changes in ν^i. Here D denotes the (total) derivative with respect to the regularization parameter(s), and c^i and ν^i are defined as before; consequently

Dc^i(ν^i) = −(H*H + ν^i K^i*K^i)^{-1} K^i*K^i c^i(ν^i).

If Dc^i(ν^i) = 0 for some indices i, then we take such a c^i(ν^i) as the solution, or replace ‖Dc^i(ν^i)‖ by a small positive constant. With this parameter choice, the solution does not depend on the indexing of the operators, nor, up to a constant, on the scaling of A, b, or any of the L^i. The former is easy to see; for the latter, let α, γ, and λ^i be positive constants, and consider the scaled problem in which A, b, and L^i are replaced by αA, γb, and λ^i L^i. The noisy component of γb is γe with ‖γe‖ ≤ γε, hence the new discrepancy bound becomes ηγε. The bound is satisfied when the regularization parameters μ^i are scaled by α²/(λ^i)². It may be checked that the weights in (14) are indeed proportional to α²/(λ^i)², that is, the weights transform in precisely the required manner. There are additional viable choices for ω^i, including two smoothed versions of the above, which consider the sensitivity of c^i(ν^i) in the range of H and K^i, respectively. We summarize the new parameter selection in Algorithm 3 below.
1. Use (12) to compute c^i and ν^i.
2. if Dc^i(ν^i) = 0 for some i then set μ^i = ν^i and μ^j = 0 for j ≠ i;
3. else let the ω^i be as in (14) and compute μ in (13) such that the discrepancy principle is satisfied.
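The operator-splitting-plus-weights idea can be illustrated as follows. This is a hedged sketch with our own test problem: the derivative Dc^i is replaced by a finite difference, and the weights are normalized for display, so it does not reproduce the exact formula (14).

```python
import numpy as np

# Hedged sketch of the splitting idea behind Algorithm 3: solve one
# single-operator discrepancy problem per L_i to obtain nu_i, estimate the
# sensitivity of each solution by a finite difference (a stand-in for Dc^i),
# and bias the weights toward the less sensitive operator.
rng = np.random.default_rng(3)
n = 50
U_, _ = np.linalg.qr(rng.standard_normal((n, n)))
V_, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U_ @ np.diag(10.0 ** np.linspace(0, -5, n)) @ V_.T
x_true = np.linspace(0, 1, n) ** 2
e = rng.standard_normal(n)
e *= 0.01 * np.linalg.norm(A @ x_true) / np.linalg.norm(e)
b = A @ x_true + e
eps, eta = np.linalg.norm(e), 1.01
Ls = [np.eye(n), np.diff(np.eye(n), axis=0)]    # identity and first derivative

def solve_one(L, mu):
    return np.linalg.solve(A.T @ A + mu * (L.T @ L), A.T @ b)

def discrepancy_mu(L):
    lo, hi = 0.0, 1.0
    while np.linalg.norm(A @ solve_one(L, hi) - b) < eta * eps and hi < 1e10:
        hi *= 10.0                              # bracket the root (with a cap)
    for _ in range(80):                         # bisection
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(A @ solve_one(L, mid) - b) < eta * eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

nus = [discrepancy_mu(L) for L in Ls]
# Finite-difference sensitivity of each one-parameter solution.
sens = [np.linalg.norm(solve_one(L, 1.01 * nu) - solve_one(L, nu)) / (0.01 * nu)
        for L, nu in zip(Ls, nus)]
omegas = np.array([1.0 / s for s in sens])
omegas /= omegas.sum()                          # normalized for display only
```

A final common factor μ would then be chosen, as in (13), so that the combined parameters μ ω^i satisfy the discrepancy principle exactly.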
An interesting property of Algorithm 3 is that, under certain conditions, c(µ(ε̃)) converges to the unregularized least squares solution c(0) = H^+ βe_1 as ε̃ goes to zero. Here H^+ denotes the Moore-Penrose pseudoinverse, and c(0) is the minimum norm solution of the unregularized problem. The following proposition formalizes this observation.

Proposition 3  Assume that H has full rank, H*βe_1 ≠ 0, and that K^i is nonsingular for i = 1, …, ℓ. Let ε̃ and ρ be defined as in Sect. 3, let η > 1 be fixed, and suppose that ν^i(ε̃) and µ(ε̃) are computed according to Algorithm 3 for all 0 ≤ ε̃ < ρ. Then there exist constants ω_min, ω_max > 0 such that

lim_{ε̃↓0} ν^i(ε̃) = 0,    0 < ω_min ≤ ω^i(ε̃) ≤ ω_max,    lim_{ε̃↓0} µ(ε̃) = 0,    (15)

and consequently c(µ(ε̃)) → c(0) as ε̃ ↓ 0.

Proof  First note that H*βe_1 ≠ 0 implies that β > 0 and ρ > 0. Since H has full rank, the maps ν ↦ c^i(ν) and ν ↦ Dc^i(ν) are continuous on [0, ∞). Let ε̃ be restricted to the interval [0, ρ/2]. By Proposition 2, each ν^i(ε̃) is bounded from below by 0 and from above by a quantity that goes to 0 as ε̃ ↓ 0, which proves the first limit in (15). Furthermore, using the definitions of c^i(ν^i(ε̃)) and Dc^i(ν^i(ε̃)), we find bounds such as 0 < ‖c^i(ν^i(ε̃))‖ ≤ ρβ/σ_min(H), which show that the inequality in (15) is satisfied for suitable ω_min and ω_max. Now, let K(ε̃) be the nonsingular matrix satisfying

K(ε̃)* K(ε̃) = Σ_{i=1}^ℓ ω^i(ε̃) K^i* K^i.

By Proposition 2, each entry of µ(ε̃) is bounded from below by 0 and from above by a quantity which goes to 0 as ε̃ ↓ 0; together with the boundedness of the weights this proves the second limit in (15).
Proposition 3 is related to [9, Thm. 3.3.3], where it is shown that the solution of a standard form Tikhonov regularization problem converges to a minimum norm least squares solution when the discrepancy principle is used and the noise converges to zero.
In this section we have discussed a new parameter selection method.In the next section we will look at the effect of perturbations in the parameters on the obtained solutions.

Perturbation Analysis
The goal of regularization is to make the reconstruction robust with respect to noise. By extension, a high sensitivity to the regularization parameters is undesirable. Consider a set of perturbed parameters µ_k + Δµ; if Δµ is sufficiently small, then

c_k(µ_k + Δµ) ≈ c_k(µ_k) − M^{-1} ΔM c_k(µ_k),

where M and ΔM are defined as

M = H_k*H_k + Σ_{i=1}^ℓ μ^i_k K^i_k* K^i_k,    ΔM = Σ_{i=1}^ℓ Δμ^i K^i_k* K^i_k.    (16)

Therefore, one might choose µ_k to minimize the sensitivity measure ‖M^{-1}ΔM‖. To see the connection with the previous section, suppose that µ_k = ν^i_k e_i and Δµ = ±‖Δµ‖ e_i; then

‖M^{-1}ΔM‖ ≥ ‖M^{-1}ΔM c_k‖ / ‖c_k‖ = ‖Δµ‖ ‖Dc^i(ν^i_k)‖ / ‖c^i(ν^i_k)‖.

Thus, larger weights ω^i_k correspond to smaller lower bounds on ‖M^{-1}ΔM‖. Having small lower bounds is desirable, since we show in Propositions 4 and 5 that minimizing ‖M^{-1}ΔM‖ is equivalent to minimizing upper bounds on the forward and backward errors, respectively.
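The first-order perturbation estimate above can be checked numerically. In the sketch below, M and ΔM follow our reading of the projected normal equations, and all matrices are random stand-ins for the projected quantities.

```python
import numpy as np

# Numerical check of the first-order sensitivity estimate: with
# M = H^T H + sum_i mu_i K_i^T K_i and dM = sum_i dmu_i K_i^T K_i,
# c(mu + dmu) ~= c(mu) - M^{-1} dM c(mu). Random stand-ins throughout.
rng = np.random.default_rng(4)
k = 8
H = rng.standard_normal((k + 1, k))
Ks = [np.triu(rng.standard_normal((k, k))) + 3.0 * np.eye(k) for _ in range(2)]
rhs = np.zeros(k + 1)
rhs[0] = 2.0                                   # plays the role of beta * e_1
mus = np.array([0.5, 0.2])
dmus = np.array([1e-4, -5e-5])

def solve_c(m):
    M = H.T @ H + sum(mi * Ki.T @ Ki for mi, Ki in zip(m, Ks))
    return np.linalg.solve(M, H.T @ rhs), M

c, M = solve_c(mus)
c_pert, _ = solve_c(mus + dmus)
dM = sum(di * Ki.T @ Ki for di, Ki in zip(dmus, Ks))
c_lin = c - np.linalg.solve(M, dM @ c)         # first-order prediction
lin_err = np.linalg.norm(c_pert - c_lin) / np.linalg.norm(c_pert - c)
```

The linearization error is second order in Δµ, i.e., a small fraction of the first-order change itself, which justifies using ‖M^{-1}ΔM‖ as a sensitivity measure.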

Proposition 4  Let c = c_k(µ_k) and x = X_k c, and let c̃ = c_k(µ_k + Δµ) and x̃ = X_k c̃ denote their perturbed counterparts. Assume H_k and all K^i_k are of full rank and define the matrices M and ΔM as in (16). If M and M + ΔM are nonsingular and the Δμ^i_k are sufficiently small so that ‖M^{-1}ΔM‖ < 1, then

‖x̃ − x‖ / ‖x‖ ≤ ‖M^{-1}ΔM‖ / (1 − ‖M^{-1}ΔM‖).

Proof  From M c = H_k* βe_1 and (M + ΔM) c̃ = H_k* βe_1 we obtain, with a little manipulation,

‖c̃ − c‖ / ‖c‖ ≤ ‖M^{-1}ΔM‖ / (1 − ‖M^{-1}ΔM‖).

Since X_k has orthonormal columns, the result of the proposition follows.
One may wonder whether it is possible to pick a vector f close to βe_1 such that the perturbed problem with right-hand side f reproduces the unperturbed solution. In other words: given perturbed regularization parameters, is there a small perturbation of βe_1 such that the optimal approximation to the exact solution is obtained? The following proposition provides a positive answer.
Proposition 5  Under the assumptions of Proposition 4, there exist vectors f and g such that the perturbed problem reproduces the unperturbed solution when βe_1 is replaced by f, and similarly for g, with relative perturbations bounded in terms of ‖M^{-1}ΔM‖.

Proof  The vector f is easy to derive using an ansatz with an arbitrary vector v. Indeed, it is easy to verify that the resulting vector satisfies the claimed bound. Here ‖R^{-*}‖ ‖R^*‖ equals the condition number κ(H_k), with R the triangular factor of a QR decomposition of H_k, and ‖ΔM M^{-1}‖ = ‖M^{-1}ΔM‖, since both M and ΔM are symmetric. This proves the first part of the proposition.
The second part is analogous. In particular, we use a similar ansatz for g and observe that g can be rewritten in the required form, which concludes the proof.
We have discussed forward and backward error bounds which help motivate our parameter choice.Now that we have investigated each of the three phases of our method, we are ready to show numerical results.

Numerical Experiments
We benchmark our algorithm with problems from Regularization Tools by Hansen [10]. Each problem provides an ill-conditioned n × n matrix A, a solution vector x of length n, and a corresponding measured vector b. We let n = 1024 and add a noise vector e to b. The entries of e are drawn independently from the standard normal distribution. The noise vector is then scaled such that ε = ‖e‖ equals 0.01‖b‖ or 0.05‖b‖ for 1 and 5% noise, respectively. We use η = 1.01 for the discrepancy bound in (7). We test the algorithms with 1000 different noise vectors for every triplet A, x, and b, and report the median results.
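The noise model of the experiments can be sketched in a few lines; the clean right-hand side below is a stand-in, not one of the Regularization Tools problems.

```python
import numpy as np

# Sketch of the experiments' noise model: standard normal noise rescaled to a
# prescribed relative level (1% here). The clean right-hand side is a stand-in.
rng = np.random.default_rng(6)
n = 1024
b_clean = np.sin(np.linspace(0.0, 2.0 * np.pi, n))
e = rng.standard_normal(n)
e *= 0.01 * np.linalg.norm(b_clean) / np.linalg.norm(e)
b = b_clean + e
rel_noise = np.linalg.norm(e) / np.linalg.norm(b_clean)
```

With this scaling, ε = ‖e‖ is known exactly, so the discrepancy bound ηε with η = 1.01 is well defined for every noise realization.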
The algorithms terminate when the relative difference between two subsequent approximations is less than 0.01, when x_{k+1} is (numerically) linearly dependent on the columns of X_k, when neither U_{k+1} nor any of the V^i_k can be expanded, or when a maximum number of iterations is reached. For Algorithm 2 we use a maximum of 20 iterations and for Algorithm 1 a maximum of (ℓ + 1) × 20 iterations. For the sake of a fair comparison, the algorithms return the best obtained approximations and the corresponding iteration numbers.
For each test problem, the tables below list the relative error obtained with Algorithm 1, abbreviated by E_od, and Algorithm 2, abbreviated by E_md; OD and MD stand for one-directional and multidirectional, respectively. Also listed are the ratio ρ_E of E_md to E_od and the ratio ρ_mv of the numbers of matrix-vector products. That is,

ρ_E = E_md / E_od   and   ρ_mv = (#MVs Algorithm 2) / (#MVs Algorithm 1).

Only matrix-vector multiplications with A, A*, L^i, and L^i* count towards the total number of MVs used by each algorithm. We note, however, that multiplications with L^i and L^i* are often less costly than multiplications with A and A*. Table 1 lists the results for one-parameter Tikhonov regularization, where we used the following regularization operators: the first derivative operator L_1 with stencil [1, −1] for Gravity-3, Heat-5, Heat, and Phillips, and the second derivative operator L_2 with the corresponding three-point stencil for the remaining problems. The table shows that multidirectional subspace expansion can obtain small improvements in the relative error at the cost of a small number of extra matrix-vector products, especially for 1% noise. We stress that in these cases, Algorithm 1 is allowed to perform additional MVs, but converges with a higher relative error. If there is no improvement in the relative error, we see that multidirectional subspace expansion can improve convergence, for example, for the Deriv2 problems as well as Foxgood.
Table 2 lists the results for multiparameter Tikhonov regularization. We used the following regularization operators for each problem: the derivative operator L_d as listed above, the identity operator I, and the orthogonal projection I − N_d N_d*, where the columns of N_d are an orthonormal basis for the nullspace N(L_d).
Overall, we observe larger improvements in the relative error for multidirectional subspace expansion, but also a larger number of MVs. We no longer see cases where multidirectional subspace expansion terminates with fewer MVs. In fact, the relative error is the same for Heat, although more MVs are required. Finally, Fig. 1 illustrates an example of the improved results which can be obtained by using multidirectional subspace expansion.
In the next tests we attempt to reconstruct the original image from a blurred and noisy observation. Consider an n × n grayscale image with pixel values in the interval [0, 1]. Then x is a vector of length n² obtained by stacking the columns of the image below each other. The matrix A represents a Gaussian blurring operator, generated with blur from Regularization Tools. The matrix A is block-Toeplitz with half-bandwidth band = 11, and the amount of blurring is given by the variance sigma = 5. The entries of the noise vector e are independently drawn from the standard normal distribution, after which the vector is scaled such that ε = E[‖e‖] = 0.05‖b‖. We take η such that ‖e‖ ≤ ηε in 99.9% of the cases. For regularization we choose an approximation to the Perona-Malik [22] operator, whose diffusivity depends on a small positive constant ρ. Because L is a nonlinear operator, we first perform a small number of iterations with a finite difference approximation L_b of L(b). The resulting intermediate solution x̃ is used for a new approximation L_x̃ of L(x̃). Finally, we run the algorithms a second time with L_x̃ and more iterations; see Reichel et al. [23] for more information regarding the implementation of the Perona-Malik operator.
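The edge-preserving behavior of a Perona-Malik-type operator can be illustrated in one dimension. This is a hedged sketch with a common diffusivity choice and our own discretization; the paper's two-dimensional implementation (following Reichel et al. [23]) may differ.

```python
import numpy as np

# Hedged 1-D sketch of a linearized Perona-Malik-type operator: L = W(x) D,
# where D is a forward-difference matrix and W holds the square root of an
# edge-stopping diffusivity g(s) = 1 / (1 + s^2 / rho^2) evaluated at the
# gradient of a reference vector. The paper's 2-D discretization may differ.
def perona_malik_operator(x_ref, rho):
    n = len(x_ref)
    D = np.diff(np.eye(n), axis=0)             # (n-1) x n forward differences
    g = 1.0 / (1.0 + (D @ x_ref / rho) ** 2)   # small weight across sharp edges
    return np.sqrt(g)[:, None] * D             # so that ||L v||^2 = sum g (Dv)^2

x_ref = np.concatenate([np.zeros(20), np.ones(20)])    # one jump at i = 20
L = perona_malik_operator(x_ref, rho=0.075)
penalty_edge = np.linalg.norm(L @ x_ref)               # the jump is barely penalized
penalty_smooth = np.linalg.norm(L @ np.linspace(0.0, 1.0, 40))
```

A sharp jump of total height 1 incurs a smaller penalty than the same rise spread out smoothly, which is exactly why the operator preserves edges while still smoothing noise.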
The first test image is also used in [13,23,25], and is shown in Fig. 2. We use ρ = 0.075, 20 iterations for the first run, and 100 iterations for the second run. The second image is an image of Saturn; see Fig. 3. For this image we use ρ = 0.03, 25 iterations for the first run, and 150 iterations for the second run. In both cases we stop the iterations around the point where convergence flattens out, as can be seen from the convergence history in Fig. 4. The figure plots the peak signal-to-noise ratio (PSNR), given by −20 log₁₀(‖x − x_k‖/n), versus the iteration number k. A higher PSNR means a higher quality reconstruction. We observe that multidirectional subspace expansion may allow convergence to a more accurate solution. Because multidirectional subspace expansion requires extra matrix-vector products, we investigate in Table 3 the performance when the PSNR of the output of Algorithm 2 achieves parity with the PSNR of the output of Algorithm 1. There is only a small difference in the total number of matrix-vector products when parity is achieved, but a large improvement in wall clock time. This improvement is in large part due to the block operations which can only be used by Algorithm 2. For reference, the runtimes were obtained on an Intel Core i7-3770 with MATLAB R2015b on 64-bit Linux 4.2.5.

Conclusions
We have presented a new method for large-scale Tikhonov regularization problems. As summarized in Algorithm 2, the method combines a new multidirectional subspace expansion with optional truncation to produce a higher-quality search space. The multidirectional expansion generates a richer search space, whereas the truncation ensures moderate growth. Numerical results illustrate that our method can yield more accurate results or faster convergence. Furthermore, we have presented lower and upper bounds on the regularization parameter when the discrepancy principle is applied to one-parameter regularization. These bounds can be used in particular to initialize bisection or the secant method.
In addition, we have introduced a straightforward parameter choice for multiparameter regularization, as summarized by Algorithm 3. The parameter selection satisfies the discrepancy principle, and is based on easy to compute derivatives that are related to the perturbation results of Sect. 5.

Fig. 1  Baart test matrix with n = 1024 and 1% noise. The solid line is the exact solution. The dashed line is the solution obtained with multiparameter regularization and the residual subspace expansion (Algorithm 1). The dotted line is the solution obtained with multiparameter regularization and multidirectional subspace expansion (Algorithm 2)

Table 1
One-parameter Tikhonov regularization results

Table 3
The number of matrix-vector products and wall clock time used by the different methods.The results in the upper rows are for Lizards and the results in the lower rows are for Saturn