The Proximal Alternating Minimization Algorithm for Two-Block Separable Convex Optimization Problems with Linear Constraints

The Alternating Minimization Algorithm has been proposed by Paul Tseng to solve convex programming problems with two-block separable linear constraints and objectives, whereby (at least) one of the components of the latter is assumed to be strongly convex. The fact that one of the subproblems to be solved within the iteration process of this method does not usually correspond to the calculation of a proximal operator through a closed formula affects the implementability of the algorithm. In this paper, we allow in each block of the objective a further smooth convex function and propose a proximal version of the algorithm, which is achieved by equipping the algorithm with proximal terms induced by variable metrics. For suitable choices of the latter, the solving of the two subproblems in the iterative scheme can be reduced to the computation of proximal operators. We investigate the convergence of the proposed algorithm in a real Hilbert space setting and illustrate its numerical performances on two applications in image processing and machine learning.


Introduction and preliminaries
The Alternating Minimization Algorithm (AMA) has been proposed by Tseng (see [16]) in order to solve optimization problems of the form

min_{x ∈ R^n, z ∈ R^m} { f(x) + g(z) } subject to Ax + Bz = b,   (1)

where f : R^n → R ∪ {±∞} is a proper, γ-strongly convex with γ > 0 (this means that f − (γ/2)‖·‖² is convex) and lower semicontinuous function, g : R^m → R ∪ {±∞} is a proper, convex and lower semicontinuous function, A ∈ R^{r×n}, B ∈ R^{r×m} and b ∈ R^r.

For c > 0 we consider the augmented Lagrangian associated with problem (1),

L_c(x, z, p) = f(x) + g(z) + ⟨p, b − Ax − Bz⟩ + (c/2) ‖Ax + Bz − b‖².

The Lagrangian associated with problem (1) is

L(x, z, p) = f(x) + g(z) + ⟨p, b − Ax − Bz⟩.

The Alternating Minimization Algorithm reads:

Algorithm 1. (AMA) Choose p^0 ∈ R^r and a sequence of stepsizes (c_k)_{k≥0} ⊆ (0, +∞). For all k ≥ 0 set:

x^{k+1} = argmin_{x ∈ R^n} { f(x) − ⟨p^k, Ax⟩ },   (2)

z^{k+1} ∈ argmin_{z ∈ R^m} { g(z) − ⟨p^k, Bz⟩ + (c_k/2) ‖Ax^{k+1} + Bz − b‖² },   (3)

p^{k+1} = p^k + c_k (b − Ax^{k+1} − Bz^{k+1}).   (4)

The main convergence properties of this numerical algorithm are summarized in the theorem below (see [16]).

Theorem 2. Let A ≠ 0 and (x', z') ∈ ri(dom f) × ri(dom g) be such that Ax' + Bz' = b. Assume that the sequence of stepsizes (c_k)_{k≥0} satisfies

ε ≤ c_k ≤ 2γ/‖A‖² − ε for all k ≥ 0,

where ε ∈ (0, γ/‖A‖²). Let (x^k, z^k, p^k)_{k≥0} be the sequence generated by Algorithm 1. Then there exist x* ∈ R^n and an optimal Lagrange multiplier p* ∈ R^r associated with the constraint Ax + Bz = b such that x^k → x* and p^k → p* as k → +∞. If the function z ↦ g(z) + ‖Bz‖² has bounded level sets, then (z^k)_{k≥0} is bounded and any of its cluster points z* provides with (x*, z*) an optimal solution of (1).
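To make the structure of Algorithm 1 concrete, the following minimal sketch (ours, not taken from the paper) runs AMA on a toy instance with f(x) = (1/2)‖x − d‖², g = λ‖·‖_1, A = Id, B = −Id and b = 0, for which the optimal x is the soft-thresholding of d; the data d, the parameter λ and the stepsize c are arbitrary choices made only for illustration.

```python
import numpy as np

# Minimal illustration of AMA (Algorithm 1), not code from the paper: we solve
#   min_{x,z} 0.5*||x - d||^2 + lam*||z||_1   subject to   x - z = 0,
# i.e. f(x) = 0.5*||x - d||^2 (1-strongly convex), g = lam*||.||_1,
# A = Id, B = -Id, b = 0.  The optimal x equals soft-thresholding of d.
rng = np.random.default_rng(0)
d, lam = rng.standard_normal(5), 0.3

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

p = np.zeros_like(d)   # dual variable p^0
c = 1.5                # stepsize inside (0, 2*gamma/||A||^2) = (0, 2)
for k in range(200):
    x = d + p                                  # x-update: argmin f(x) - <p, x>
    z = soft_threshold(x - p / c, lam / c)     # z-update: prox of (lam/c)*||.||_1
    p = p + c * (z - x)                        # p-update: p + c*(b - A x - B z)

print(np.allclose(x, soft_threshold(d, lam), atol=1e-6))  # expected: True
```

Running the loop reproduces, up to numerical precision, the soft-thresholding of d, in accordance with the convergence statement of Theorem 2.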
The strong convexity of f makes it possible to reduce the minimization problem in (2) to the calculation of the proximal operator of a proper, convex and lower semicontinuous function. Due to the presence of the linear operator B, this is in general not the case for the minimization problem in (3). This fact makes the AMA method less tractable from the point of view of its implementation. With the exception of some very particular cases, one has to use a subroutine in order to compute (z^k)_{k≥0}, a fact which can have a negative influence on the convergence behaviour of the algorithm. One possibility to avoid this, without losing the convergence properties of AMA, is to replace (3) by a proximal step with respect to g. The papers [3] and [9] provide convincing evidence for the versatility and efficiency of proximal point algorithms for solving nonsmooth convex optimization problems.
In this paper we address, in a real Hilbert space setting, a problem of type (1) which is obtained by adding in each block of the objective a further smooth convex function. To solve this problem we propose a so-called Proximal Alternating Minimization Algorithm (Proximal AMA), which is obtained by introducing in each of the minimization problems (2) and (3) additional proximal terms defined by means of positively semidefinite operators. The two smooth convex functions in the objective are evaluated via gradient steps. We show that, for appropriate choices of these operators, the minimization problem in (3) reduces to the evaluation of a proximal operator. We carry out the convergence analysis of the proposed method and show that the generated sequence converges weakly to a saddle point of the Lagrangian associated with the optimization problem under investigation. The numerical performance of Proximal AMA, in particular in comparison with AMA, is illustrated on two applications in image processing and machine learning.
A similarity of AMA to the classical ADMM algorithm, introduced by Gabay and Mercier in [12], is evident. In [10, 15] (see also [1, 5]) proximal versions of the ADMM algorithm have been proposed and investigated from the point of view of their convergence properties. Parts of the convergence analysis for Proximal AMA are carried out in a similar spirit to the convergence proofs in these papers.
In the remainder of this section, we discuss some notations, definitions and basic properties we will use in this paper (see [2]). Let H and G be real Hilbert spaces with corresponding inner products ⟨·,·⟩ and associated norms ‖·‖ = √⟨·,·⟩. In both spaces we denote by ⇀ the weak convergence and by → the strong convergence.
We say that a function f : H → R ∪ {±∞} is proper if its domain dom f := {x ∈ H : f(x) < +∞} is nonempty and f(x) > −∞ for all x ∈ H. We denote by Γ(H) the family of proper, convex and lower semicontinuous functions f : H → R ∪ {±∞}. The conjugate of f is

f* : H → R ∪ {±∞}, f*(p) = sup_{x ∈ H} { ⟨p, x⟩ − f(x) },

and is a proper, convex and lower semicontinuous function whenever f ∈ Γ(H). It also holds (f*)* = f in this case. The infimal convolution of two proper functions f, g : H → R ∪ {±∞} is

(f □ g) : H → R ∪ {±∞}, (f □ g)(x) = inf_{y ∈ H} { f(y) + g(x − y) }.

The proximal point operator of parameter γ of f at x ∈ H, where γ > 0, is defined as

prox_{γ f}(x) = argmin_{y ∈ H} { f(y) + (1/(2γ)) ‖y − x‖² }.

According to Moreau's decomposition formula we have

prox_{γ f}(x) + γ prox_{(1/γ) f*}(x/γ) = x for all x ∈ H.

Let C ⊆ H be a convex and closed set. The strong quasi-relative interior of C is

sqri(C) = { x ∈ C : ∪_{λ>0} λ(C − x) is a closed linear subspace of H }.

We always have int(C) ⊆ sqri(C) and, if H is finite dimensional, sqri(C) coincides with the relative interior ri(C) of C. The identity operator on H is denoted by Id, that is, Id(x) = x for all x ∈ H. Let A : H → G be a linear continuous operator. The operator A* : G → H, fulfilling ⟨A*y, x⟩ = ⟨y, Ax⟩ for all x ∈ H and y ∈ G, denotes the adjoint operator of A, while ‖A‖ := sup{ ‖Ax‖ : ‖x‖ ≤ 1 } denotes the norm of A. For α ≥ 0 we denote by P_α(H) the family of linear, continuous and self-adjoint operators M : H → H satisfying ⟨Mx, x⟩ ≥ α‖x‖² for all x ∈ H; the elements of P_0(H) are called positively semidefinite. For a positively semidefinite operator M we set ‖x‖²_M := ⟨Mx, x⟩ for x ∈ H, and for two such operators M_1, M_2 we write M_1 ⪰ M_2 (or M_2 ⪯ M_1) if M_1 − M_2 is positively semidefinite.

The problem we investigate in this paper is the following one.

Problem 3. Let H, G and K be real Hilbert spaces, f ∈ Γ(H) γ-strongly convex with γ > 0, g ∈ Γ(G), h_1 : H → R a convex and Fréchet differentiable function with L_1-Lipschitz continuous gradient, h_2 : G → R a convex and Fréchet differentiable function with L_2-Lipschitz continuous gradient, A : H → K and B : G → K linear continuous operators and b ∈ K. Consider the two-block separable optimization problem

min_{x ∈ H, z ∈ G} { f(x) + h_1(x) + g(z) + h_2(z) } subject to Ax + Bz = b.   (5)

Notice that we allow the Lipschitz constant of the gradient of the function h_1 to be zero. In this case h_1 is an affine function. The same applies to the function h_2.
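As a quick numerical illustration of the proximal operator and of Moreau's decomposition formula recalled above (our own sketch, not part of the paper), take f = ‖·‖_1, for which f* is the indicator function of the unit ℓ∞-ball, prox_{γ f} is soft-thresholding and prox_{(1/γ) f*} is the projection onto [−1, 1]^n:

```python
import numpy as np

# Numerical check of the proximal operator and Moreau's decomposition
# (an illustration we added, not code from the paper).  For f = ||.||_1,
#   prox_{gamma f}(x)        = soft-thresholding with threshold gamma,
#   prox_{(1/gamma) f*}(y)   = projection onto the box [-1, 1]^n.
rng = np.random.default_rng(1)
x, gamma = rng.standard_normal(6), 0.7

prox_f = np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)   # prox_{gamma f}(x)
prox_fstar = np.clip(x / gamma, -1.0, 1.0)                  # prox_{(1/gamma) f*}(x/gamma)

# Moreau: prox_{gamma f}(x) + gamma * prox_{(1/gamma) f*}(x/gamma) = x
print(np.allclose(prox_f + gamma * prox_fstar, x))           # expected: True
```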
The Lagrangian associated with the optimization problem (5) is

L : H × G × K → R ∪ {±∞}, L(x, z, p) = f(x) + h_1(x) + g(z) + h_2(z) + ⟨p, b − Ax − Bz⟩.

We say that (x*, z*, p*) ∈ H × G × K is a saddle point of the Lagrangian L if

L(x*, z*, p) ≤ L(x*, z*, p*) ≤ L(x, z, p*)

holds for all (x, z, p) ∈ H × G × K.
One can show that (x*, z*, p*) is a saddle point of the Lagrangian L if and only if (x*, z*) is an optimal solution of (5), p* is an optimal solution of its Fenchel dual problem

sup_{p ∈ K} { −(f + h_1)*(A*p) − (g + h_2)*(B*p) + ⟨p, b⟩ },   (6)

and the optimal objective values of (5) and (6) coincide. The existence of saddle points for L is guaranteed when (5) has an optimal solution and, for instance, the Attouch–Brézis-type condition

b ∈ sqri( A(dom f) + B(dom g) )   (7)

holds (see [4, Theorem 3.4]). In the finite dimensional setting, this asks for the existence of x ∈ ri(dom f) and z ∈ ri(dom g) satisfying Ax + Bz = b and coincides with the assumption used by Tseng in [16].
The system of optimality conditions for the primal–dual pair of optimization problems (5)–(6) reads:

A*p* − ∇h_1(x*) ∈ ∂f(x*), B*p* − ∇h_2(z*) ∈ ∂g(z*), Ax* + Bz* = b.   (8)

This means that if (5) has an optimal solution (x*, z*) and a qualification condition, like for instance (7), is fulfilled, then there exists an optimal solution p* of (6) such that (8) holds; consequently, (x*, z*, p*) is a saddle point of the Lagrangian L. Conversely, if (x*, z*, p*) is a saddle point of the Lagrangian L, and thus satisfies relation (8), then (x*, z*) is an optimal solution of (5) and p* is an optimal solution of (6).
Remark 4. If (x*_1, z*_1, p*_1) and (x*_2, z*_2, p*_2) are two saddle points of the Lagrangian L, then x*_1 = x*_2. This follows easily by using the strong monotonicity of ∂f, the monotonicity of ∂g and the relations in (8).
In the following we formulate the Proximal Alternating Minimization Algorithm for solving (5). To this end, we modify Tseng's AMA by evaluating in each of the two subproblems the functions h_1 and h_2 via gradient steps, respectively, and by introducing proximal terms defined through two sequences of positively semidefinite operators (M^k_1)_{k≥0} and (M^k_2)_{k≥0}.

Algorithm 5. (Proximal AMA) Choose x^0 ∈ H, z^0 ∈ G, p^0 ∈ K, a sequence of stepsizes (c_k)_{k≥0} ⊆ (0, +∞) and sequences of positively semidefinite operators (M^k_1)_{k≥0} ⊆ P_0(H) and (M^k_2)_{k≥0} ⊆ P_0(G). For all k ≥ 0 set:

x^{k+1} = argmin_{x ∈ H} { f(x) − ⟨p^k, Ax⟩ + ⟨x − x^k, ∇h_1(x^k)⟩ + (1/2) ‖x − x^k‖²_{M^k_1} },   (9)

z^{k+1} ∈ argmin_{z ∈ G} { g(z) − ⟨p^k, Bz⟩ + (c_k/2) ‖Ax^{k+1} + Bz − b‖² + ⟨z − z^k, ∇h_2(z^k)⟩ + (1/2) ‖z − z^k‖²_{M^k_2} },   (10)

p^{k+1} = p^k + c_k (b − Ax^{k+1} − Bz^{k+1}).   (11)
Remark 6. The sequence (z^k)_{k≥0} is uniquely determined if there exists α > 0 such that c_k B*B + M^k_2 ∈ P_α(G) for all k ≥ 0. This actually ensures that the objective function in the subproblem (10) is strongly convex.
Remark 7. Let k ≥ 0 be fixed and M^k_2 := (1/σ_k) Id − c_k B*B, where σ_k > 0 is such that σ_k c_k ‖B‖² ≤ 1. Then M^k_2 is positively semidefinite and the update of z^{k+1} in the Proximal AMA method becomes a proximal step. Indeed, (10) holds if and only if

0 ∈ ∂g(z^{k+1}) + ∇h_2(z^k) − B*p^k + c_k B*(Ax^{k+1} + Bz^{k+1} − b) + M^k_2 (z^{k+1} − z^k).

But this is nothing else than

z^{k+1} = prox_{σ_k g} ( z^k − σ_k ( ∇h_2(z^k) − B*p^k + c_k B*(Ax^{k+1} + Bz^k − b) ) ).

The convergence of the Proximal AMA method is addressed in the next theorem.
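The following sketch (an illustration we added; the callables prox_f, prox_g, grad_h1, grad_h2 and all parameter names are placeholders, not objects from the paper) shows one sweep of Algorithm 5 in finite dimensions for the choices M^k_1 = (1/τ_k) Id and M^k_2 = (1/σ_k) Id − c_k B*B, under which both subproblems reduce to proximal steps as described in Remark 7:

```python
import numpy as np

# A sketch of one Proximal AMA sweep (Algorithm 5) for the choices
# M_1^k = (1/tau) Id and M_2^k = (1/sigma) Id - c B^T B from Remark 7;
# prox_f, prox_g, grad_h1, grad_h2 and all parameters are placeholders
# to be supplied by the user, not objects defined in the paper.
def proximal_ama_step(x, z, p, A, B, b, prox_f, prox_g, grad_h1, grad_h2,
                      c, tau, sigma):
    # (9): with M_1^k = (1/tau) Id the x-subproblem is a proximal step of f
    x_new = prox_f(x - tau * (grad_h1(x) - A.T @ p), tau)
    # (10): with M_2^k = (1/sigma) Id - c B^T B (positively semidefinite as
    # long as sigma * c * ||B||^2 <= 1) the z-subproblem is a proximal step of g
    z_new = prox_g(z - sigma * (grad_h2(z) - B.T @ p
                                + c * B.T @ (A @ x_new + B @ z - b)), sigma)
    # (11): ascent step on the dual variable
    p_new = p + c * (b - A @ x_new - B @ z_new)
    return x_new, z_new, p_new
```

With these choices, each iteration only requires one proximal step of f, one proximal step of g, two gradient evaluations and a few matrix–vector products.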
Theorem 8. In the setting of Problem 3, let the set of saddle points of the Lagrangian L be nonempty.
Assume that M^k_1 − (L_1/2) Id ∈ P_0(H), M^{k+1}_1 ⪯ M^k_1 and M^{k+1}_2 ⪯ M^k_2 for all k ≥ 0, and that (c_k)_{k≥0} is a monotonically decreasing sequence satisfying

ε ≤ c_k ≤ 2γ/‖A‖² − ε for all k ≥ 0,   (12)

where ε ∈ (0, γ/‖A‖²). If one of the following assumptions:

(i) there exists α > 0 such that M^k_2 − (L_2/2) Id ∈ P_α(G) for all k ≥ 0;

(ii) there exists β > 0 such that B*B ∈ P_β(G);

holds true, then the sequence (x^k, z^k, p^k)_{k≥0} generated by Algorithm 5 converges weakly to a saddle point of the Lagrangian L.
Proof. Let (x*, z*, p*) be a fixed saddle point of the Lagrangian L. This means that it fulfils the system of optimality conditions

A*p* − ∇h_1(x*) ∈ ∂f(x*),   (13)

B*p* − ∇h_2(z*) ∈ ∂g(z*),   (14)

Ax* + Bz* = b.   (15)

We start by proving that ∑_{k≥0} R_k < +∞, where (R_k)_{k≥0} is the sequence appearing in (19) below, and that the sequences (z^k)_{k≥0} and (p^k)_{k≥0} are bounded. Assume that L_1 > 0 and L_2 > 0, and let k ≥ 0 be fixed. Writing the optimality conditions for the subproblems (9) and (10) we obtain

A*p^k − ∇h_1(x^k) − M^k_1 (x^{k+1} − x^k) ∈ ∂f(x^{k+1})   (16)

and

B*p^k − ∇h_2(z^k) − c_k B*(Ax^{k+1} + Bz^{k+1} − b) − M^k_2 (z^{k+1} − z^k) ∈ ∂g(z^{k+1}),   (17)

respectively. Combining (13), (14), (16) and (17) with the strong monotonicity of ∂f and the monotonicity of ∂g yields two inequalities which, after summation, lead to (18). According to the Baillon–Haddad theorem (see [2, Corollary 18.16]), the gradients of h_1 and h_2 are 1/L_1- and 1/L_2-cocoercive, respectively. By employing the cocoercivity inequalities together with (11) and (15) in (18), expressing the inner products by means of norms, using the inequality ‖Ax* − Ax^{k+1}‖² ≤ ‖A‖² ‖x* − x^{k+1}‖² and, finally, the monotonicity of (M^k_2)_{k≥0} and of (c_k)_{k≥0}, we arrive at the estimate (19), in which R_k ≥ 0 collects the remaining terms. If L_1 = 0 (and, consequently, ∇h_1 is constant) and L_2 > 0, then, by using the same arguments, we obtain again (19), but with a correspondingly modified R_k; the same applies if L_2 = 0 (and, consequently, ∇h_2 is constant) and L_1 > 0.

Let N ≥ 0 be fixed. By telescoping (19) for k = 0, …, N and letting N → +∞, we conclude that (p^k)_{k≥0} is bounded and that ∑_{k≥0} R_k < +∞. Taking (12) into account, we have c_k(2γ − c_k ‖A‖²) ≥ ε² ‖A‖² > 0 for all k ≥ 0, which yields that x^k converges strongly to x* as k → +∞ and, by using (11) and (15), that Ax^{k+1} + Bz^{k+1} − b → 0 as k → +∞.

Suppose that assumption (i) holds true, namely, that there exists α > 0 such that M^k_2 − (L_2/2) Id ∈ P_α(G) for all k ≥ 0. From (20) it follows that (z^k)_{k≥0} is bounded, while (22) ensures that z^{k+1} − z^k → 0 as k → +∞. In the following we show that each weak sequential cluster point of (x^k, z^k, p^k)_{k≥0} (notice that the sequence is bounded) is a saddle point of L. Let (z', p') ∈ G × K be such that a subsequence (x^{k_j}, z^{k_j}, p^{k_j})_{j≥0} converges weakly to (x*, z', p') as j → +∞. From (16) we have

A*p^{k_j} − ∇h_1(x^{k_j}) − M^{k_j}_1 (x^{k_j+1} − x^{k_j}) ∈ ∂f(x^{k_j+1}) for all j ≥ 0.

Since x^{k_j} converges strongly to x* and p^{k_j} converges weakly to p' as j → +∞, using the continuity of ∇h_1 and the closedness of the graph of the convex subdifferential of f in the strong–weak topology (see [2, Proposition 20.33]), it follows that A*p' − ∇h_1(x*) ∈ ∂f(x*). From (17) we obtain for all j ≥ 0 an inclusion which can be equivalently rewritten in terms of the subdifferential of (g + h_2)*. By denoting v_j := z^{k_j+1} and u_j := p^{k_j} for all j ≥ 0 and by using (23), (24), (25) and (28) together with the Lipschitz continuity of ∇h_2, (11) and the monotonicity of the subdifferential, we can pass to the limit as j → +∞. The maximal monotonicity of the convex subdifferential of (g + h_2)* ensures that z' ∈ ∂(g + h_2)*(B*p'), which is the same as B*p' ∈ ∂(g + h_2)(z'); in other words, B*p' − ∇h_2(z') ∈ ∂g(z'). Finally, from (11) and (24) it follows that Ax* + Bz' = b. In conclusion, (x*, z', p') is a saddle point of the Lagrangian L.
In the following we show that the sequence (x^k, z^k, p^k)_{k≥0} converges weakly. Let (x*, z_1, p_1) and (x*, z_2, p_2) be two weak sequential cluster points of (x^k, z^k, p^k)_{k≥0}. Then there exists (k_s)_{s≥0}, k_s → +∞ as s → +∞, such that the subsequence (x^{k_s}, z^{k_s}, p^{k_s})_{s≥0} converges weakly to (x*, z_1, p_1) as s → +∞. Furthermore, there exists (k_t)_{t≥0}, k_t → +∞ as t → +∞, such that the subsequence (x^{k_t}, z^{k_t}, p^{k_t})_{t≥0} converges weakly to (x*, z_2, p_2) as t → +∞. As seen before, (x*, z_1, p_1) and (x*, z_2, p_2) are both saddle points of the Lagrangian L.
From (20), which is fulfilled for every saddle point of the Lagrangian L, we obtain for all k ≥ 0 an inequality involving the two cluster points. Since M^k_2 ⪰ (α + L_2/2) Id for all k ≥ 0 and (M^k_2)_{k≥0} is a monotone sequence of symmetric operators, there exists a symmetric operator M ⪰ (α + L_2/2) Id such that (M^k_2)_{k≥0} converges pointwise to M in the strong topology as k → +∞. Furthermore, let c := lim_{k→+∞} c_k > 0. Taking the limits in (27) along the subsequences (k_s)_{s≥0} and (k_t)_{t≥0}, it follows that z_1 = z_2 and p_1 = p_2, hence the sequence (x^k, z^k, p^k)_{k≥0} converges weakly to a saddle point of the Lagrangian L.

Assume now that condition (ii) holds, namely, that there exists β > 0 such that B*B ∈ P_β(G). For the saddle point (x*, z*, p*) of the Lagrangian L we fixed at the beginning of the proof and the generated sequence (x^k, z^k, p^k)_{k≥0} we receive, because of (23), that the sequence (Bz^k)_{k≥0} converges strongly. Moreover, since β‖z^k − z^{k'}‖² ≤ ‖Bz^k − Bz^{k'}‖² for all k, k' ≥ 0, the sequence (z^k)_{k≥0} converges strongly as well. The remainder of the proof follows in analogy to the one given under assumption (i).
Image processing

In the first numerical experiment we reconstruct blurred and noisy images by solving a total variation regularized deblurring problem, denoted in the following by (30). The Fenchel dual problem of (30) is (31) (see [2,4]). Since f and g have full domains, strong duality for (30)–(31) holds. The operators M^k_2 were chosen as in Remark 7 for every k ≥ 0, so that the update of the dual variables (q^{k+1}_1, q^{k+1}_2) in the iterative scheme of Proximal AMA is given by a closed formula. In the case of the anisotropic total variation, the conjugate of g is the indicator function of the set { (q_1, q_2) : |q_{1,ij}| ≤ λ and |q_{2,ij}| ≤ λ for all i, j }, and the update of (q^{k+1}_1, q^{k+1}_2) reduces to the projection onto this set. In the case of the isotropic total variation, the conjugate of g is the indicator function of the set { (q_1, q_2) : q_{1,ij}² + q_{2,ij}² ≤ λ² for all i, j }, and the update of (q^{k+1}_1, q^{k+1}_2) reduces to the projection onto this set (sketches of both projections are given below).

We compared the Proximal AMA method with Tseng's AMA method. While in Proximal AMA a closed formula is available for the computation of (q^{k+1}_1, q^{k+1}_2), in AMA we solved the resulting optimization subproblem

(q^{k+1}_1, q^{k+1}_2) = argmin_{q_1, q_2} { g*(q_1, q_2) − ⟨x^{k+1}, L*(q_1, q_2)⟩ + (1/(2c_k)) ‖A*p^{k+1} + L*(q_1, q_2)‖² }

in every iteration k ≥ 0 by making some steps of the FISTA method ([3]). We used in our experiments a Gaussian blur of size 9 × 9 and standard deviation 4, which led to an operator A with ‖A‖² = 1 and A* = A. Furthermore, we added Gaussian white noise with standard deviation 10^{-3}. We used for both algorithms a constant sequence of stepsizes c_k = 2 − 10^{-7} for all k ≥ 0; one can notice that (c_k)_{k≥0} fulfils (12). In Proximal AMA we considered σ_k = 1/(8.00001 · c_k) for all k ≥ 0, which ensured that every matrix M^k_2 = (1/σ_k) I − c_k L*L is positively definite. This is actually the case if σ_k c_k ‖L‖² < 1 for all k ≥ 0. In other words, we guaranteed that assumption (i) in Theorem 8 is fulfilled.
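For completeness, the projections that realize the closed-formula dual updates mentioned above can be sketched as follows (our own illustration; q1 and q2 stand for the two components of the discrete gradient field and lam for the regularization parameter λ):

```python
import numpy as np

# Sketches of the projections behind the closed-formula dual updates
# (added by us for illustration; q1, q2 and lam are placeholders).
def project_anisotropic(q1, q2, lam):
    # projection onto {(q1, q2): |q1_ij| <= lam and |q2_ij| <= lam for all i, j},
    # the set on which the conjugate of the anisotropic TV term is finite
    return np.clip(q1, -lam, lam), np.clip(q2, -lam, lam)

def project_isotropic(q1, q2, lam):
    # projection onto {(q1, q2): sqrt(q1_ij^2 + q2_ij^2) <= lam for all i, j},
    # the set on which the conjugate of the isotropic TV term is finite
    scale = np.maximum(1.0, np.sqrt(q1**2 + q2**2) / lam)
    return q1 / scale, q2 / scale
```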
In Figures 2–5 we show how Proximal AMA and AMA perform when reconstructing the blurred and noisy MATLAB test image "office_4" for different choices of the regularization parameter λ and by considering both the anisotropic and the isotropic total variation as regularization functionals. For all considered instances one can notice that Proximal AMA outperforms AMA both in the convergence behaviour of the sequence of objective function values and in that of the sequence of ISNR (Improvement in Signal-to-Noise Ratio) values.

Kernel based machine learning
In this subsection we will describe the numerical experiments we carried out in the context of classifying images via support vector machines.
The given data set consisting of 5570 training images and 1850 test images of size 28 × 28 was taken from the website http://www.cs.nyu.edu/roweis/data.html. The problem we considered was to determine a decision function based on a pool of handwritten digits showing either the number five or the number six, labeled by +1 and −1, respectively (see Figure 6). To evaluate the quality of the decision function we compute the percentage of misclassified images of the test data set. In order to describe the approach we used, let Z = {(X_1, Y_1), …, (X_n, Y_n)} ⊆ R^d × {+1, −1} denote the given training data set. The decision function f was assumed to be an element of the Reproducing Kernel Hilbert Space (RKHS) H_κ induced by the symmetric and finitely positive definite Gaussian kernel function κ with kernel parameter σ > 0. By K ∈ R^{n×n} we denoted the Gram matrix with respect to the training data set Z, namely, the symmetric and positive definite matrix with entries K_ij = κ(X_i, X_j) for i, j = 1, …, n. To penalize the deviation between the predicted value f(x) and the true value y ∈ {+1, −1} we used the hinge loss function (x, y) ↦ max{1 − xy, 0}.
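For illustration, the Gram matrix of a Gaussian kernel can be assembled as follows (a small helper we added; the parameterization κ(x, y) = exp(−‖x − y‖²/(2σ²)) as well as the inputs X and sigma are our own assumptions, not specifications from the paper):

```python
import numpy as np

# Build the Gram matrix of a Gaussian kernel for a training set X of shape
# (n, d); the kernel parameterization below is an assumption made for this sketch.
def gaussian_gram(X, sigma):
    sq_norms = np.sum(X**2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    return np.exp(-sq_dists / (2.0 * sigma**2))   # K_ij = kappa(X_i, X_j)
```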
According to the Representer Theorem, the decision function f can be expressed as a kernel expansion in terms of the training data, i.e., f(·) = ∑_{i=1}^n x_i κ(·, X_i), where x = (x_1, …, x_n) ∈ R^n is the optimal solution of the optimization problem

min_{x ∈ R^n} { (1/2) x^T K x + C ∑_{i=1}^n max{1 − (Kx)_i Y_i, 0} }.   (32)

Here, C > 0 denotes the regularization parameter controlling the tradeoff between the loss function and the regularization term. Hence, in order to determine the decision function one has to solve the convex optimization problem (32), which we write as

min_{x ∈ R^n, z ∈ R^n} { f(x) + g(z) } subject to Kx − z = 0,

where f : R^n → R, f(x) = (1/2) x^T K x, and g : R^n → R, g(z) = C ∑_{i=1}^n max{1 − z_i Y_i, 0}. Since the Gram matrix K is positively definite, the function f is λ_min(K)-strongly convex, where λ_min(K) denotes the minimal eigenvalue of K, and differentiable, and it holds ∇f(x) = Kx for all x ∈ R^n. For p = (p_1, …, p_n) ∈ R^n, we have

g*(p) = ∑_{i=1}^n p_i Y_i, if p_i Y_i ∈ [−C, 0] for i = 1, …, n, and g*(p) = +∞ otherwise.

Consequently, for every µ > 0 and p = (p_1, …, p_n) ∈ R^n,

prox_{µ g*}(p) = ( Y_i P_{[−C, 0]}(Y_i p_i − µ) )_{i=1,…,n},

where P_{[−C, 0]} denotes the projection onto the interval [−C, 0]. We implemented Proximal AMA for M^k_2 = 0 for all k ≥ 0 and different choices for the sequence (M^k_1)_{k≥0}. This resulted in an iterative scheme which reads for all k ≥ 0:

x^{k+1} = argmin_{x ∈ R^n} { (1/2) x^T K x − ⟨p^k, Kx⟩ + (1/2) ‖x − x^k‖²_{M^k_1} },   (33)

z^{k+1} = prox_{(1/c_k) g} ( Kx^{k+1} − (1/c_k) p^k ),   (34)

p^{k+1} = p^k + c_k ( z^{k+1} − Kx^{k+1} ),   (35)

where the proximal step in (34) is evaluated through prox_{c_k g*} by means of Moreau's decomposition formula. We would like to emphasize that the AMA method updates the sequence (z^{k+1})_{k≥0} also via (34), while the sequence (x^{k+1})_{k≥0} is, as M^k_1 = 0, updated via x^{k+1} = p^k for all k ≥ 0. However, it turned out that Proximal AMA with M^k_1 = τ_k K, for τ_k > 0 and all k ≥ 0, performs better than the version with M^k_1 = 0 for all k ≥ 0, which actually corresponds to the AMA method. In this case (33) becomes x^{k+1} = (1/(1 + τ_k)) (p^k + τ_k x^k) for all k ≥ 0. We used for both algorithms a constant sequence of stepsizes c_k = 2 λ_min(K)/‖K‖² − 10^{-8} for all k ≥ 0. The tables below show, for C = 1 and different values of the kernel parameter σ, that Proximal AMA outperforms AMA with respect to the CPU time and the number of iterations needed to attain a fixed misclassification rate (which proved to be the best one among several obtained by varying C and σ) and a prescribed value of the root-mean-square error (RMSE) of the sequence of primal iterates.
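A compact sketch of the resulting scheme (33)–(35) with M^k_1 = τ K is given below (our own illustration; the proximal step of the hinge loss is written out coordinate-wise, and K, Y, C, c, tau and the iteration count are placeholders chosen by us):

```python
import numpy as np

# Sketch of the Proximal AMA iteration (33)-(35) for the support vector machine
# problem, with M_1^k = tau * K and M_2^k = 0; all inputs are placeholders.
def prox_hinge(v, Y, C, gamma):
    # coordinate-wise prox of z -> gamma * C * sum_i max(1 - z_i * Y_i, 0)
    u = Y * v
    u = np.where(u > 1.0, u, np.where(u < 1.0 - gamma * C, u + gamma * C, 1.0))
    return Y * u

def proximal_ama_svm(K, Y, C, c, tau, iters=500):
    n = K.shape[0]
    x, p = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = (p + tau * x) / (1.0 + tau)               # (33) with M_1^k = tau * K
        z = prox_hinge(K @ x - p / c, Y, C, 1.0 / c)  # (34): proximal step of g
        p = p + c * (z - K @ x)                       # (35): dual update
    return x
```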

Conclusions and further research
The Proximal AMA method has the advantage over the classical AMA method that it allows performing a proximal step in the calculation of z^{k+1} whenever the operators M^k_2, k ≥ 0, are chosen appropriately. In this way one can avoid the use of a minimization subroutine in every iteration. The method also has more flexibility due to the presence of the smooth convex functions h_1 and h_2. In addition, it allows the use of proximal terms induced by variable metrics in the calculation of x^{k+1}, k ≥ 0, too, which may lead to better performances, as shown in the numerical experiments on support vector machines classification.
In the future, it might be interesting to: (1) carry out investigations related to the convergence rates for both the iterates and the objective function values of Proximal AMA; as emphasized in [5] for the Proximal ADMM algorithm, the use of variable metrics can play a decisive role in this context, as it may lead to dynamic stepsizes which favour an improved convergence behaviour of the algorithm (see also [6,8]);
(2) consider a slight modification of Algorithm 5, obtained by replacing (11) with

p^{k+1} = p^k + θ c_k (b − Ax^{k+1} − Bz^{k+1}),

where θ ∈ (0, (1 + √5)/2), and to investigate the convergence properties of the resulting scheme; it has been noticed in [11] that the numerical performances of the classical ADMM algorithm for convex optimization problems in the presence of a relaxation parameter θ ∈ (1, (1 + √5)/2) outperform the ones obtained for θ = 1;
(3) embed the investigations made in this paper in the more general framework of monotone inclusion problems, as it was recently done in [2] starting from the Proximal ADMM algorithm.

Figure 1: The original image, the blurred and noisy image and the reconstructed image after 50 seconds of CPU time.

Figure 2: The objective function values and the ISNR values for the anisotropic TV and λ = 5 · 10^{-5}.

Figure 3: The objective function values and the ISNR values for the anisotropic TV and λ = 10^{-5}.

Figure 4: The objective function values and the ISNR values for the isotropic TV and λ = 5 · 10^{-5}.

Figure 5: The objective function values and the ISNR values for the isotropic TV and λ = 10^{-4}.

Figure 6: A sample of images belonging to the classes +1 and −1, respectively.

Table 1: Performance evaluation of Proximal AMA (with τ_k = 10 for all k ≥ 0) and AMA for the classification problem with C = 1 and σ = 0.2; the columns report the results for reaching a misclassification rate of 0.7027 % and an RMSE ≤ 10^{-3}, respectively. The entries refer to the CPU times in seconds and the number of iterations.

Table 2: Performance evaluation of Proximal AMA (with τ_k = 102 for all k ≥ 0) and AMA for the classification problem with C = 1 and σ = 0.25. The entries refer to the CPU times in seconds and the number of iterations.