A general double-proximal gradient algorithm for d.c. programming

The possibilities of exploiting the special structure of d.c. programs, which consist of optimising the difference of convex functions, are currently more or less limited to variants of the DCA proposed by Pham Dinh Tao and Le Thi Hoai An in 1997. These assume that the convex part, the concave part, or both are evaluated via their subgradients. In this paper we propose an algorithm which allows the evaluation of both the concave and the convex part by their proximal points. Additionally, we allow a smooth part, which is evaluated via its gradient. In the spirit of primal-dual splitting algorithms, the concave part might be the composition of a concave function with a linear operator, which are, however, evaluated separately. For this algorithm we show that every cluster point is a critical point of the optimisation problem. Furthermore, we show the connection to the Toland dual problem and prove a descent property for the objective function values of a primal-dual formulation of the problem. Convergence of the iterates is shown if this objective function satisfies the Kurdyka–Łojasiewicz property. In the last part, we apply the algorithm to an image processing model.


Introduction
Optimisation problems where the objective function can be written as a difference of two convex functions arise naturally in several applications, such as image processing [1], machine learning [2], optimal transport [3] and sparse signal recovery [4]. The class of d.c. functions is rather broad and contains, for example, every twice continuously differentiable function. For an overview of d.c. functions, see e.g. [5].
The classical approach for iteratively finding local extrema of d.c. problems was described by Pham Dinh Tao and Le Thi Hoai An [6] in 1997 under the name DCA (d.c. algorithms). One of the most recent papers on this topic is [7], where an accelerated variant of the DCA is proposed under the supplementary assumption that both the convex and the concave part are continuously differentiable. In 2003, Sun, Sampaio and Candido introduced a proximal point approach into the theory of d.c. algorithms [8], where the convex part is evaluated by its proximal point operator, while the concave part is still evaluated by one of its subgradients. Later on, the approach in [8] has been extended in [9,10,11] by considering in the convex part a further convex smooth summand that is evaluated via its gradient.
In this paper, we go one step further by proposing an algorithm where both the convex and the concave part are evaluated via proximal steps. In convex optimisation, using proximal steps instead of subgradient steps has several advantages (see also the sketch below):
• The subdifferential at a point may fail to be a singleton: it may be empty, in which case an algorithm gets stuck, or it may contain several distinct elements, in which case one of them has to be chosen.
• Even if the subdifferential is a singleton in each step, it might be highly discontinuous, so small deviations might lead to a very different behaviour of the iterates.
• Better convergence rates can be guaranteed for proximal algorithms than for subgradient algorithms (compare [12] and [13, Theorem 3.2.3]).
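To make the first two points concrete, the following small NumPy sketch (our own illustration, not part of the original text) compares a subgradient oracle of the absolute value with its proximal map: the former jumps at the origin, while the latter, the well-known soft-thresholding operator, is single-valued and 1-Lipschitz.

```python
import numpy as np

def subgradient_abs(t):
    # A subgradient oracle for f(t) = |t|: sign(t) away from 0; at t = 0 the
    # subdifferential is the whole interval [-1, 1], and np.sign picks 0.
    return np.sign(t)

def prox_abs(t, gamma):
    # prox of gamma * |.| at t (soft thresholding): single-valued, 1-Lipschitz.
    return np.sign(t) * np.maximum(np.abs(t) - gamma, 0.0)

t = np.array([-1e-3, -1e-9, 0.0, 1e-9, 1e-3])
print(subgradient_abs(t))  # jumps: [-1. -1.  0.  1.  1.]
print(prox_abs(t, 0.5))    # all 0 here (|t| <= gamma); varies continuously in t
```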
In addition, we consider a linear operator in the concave part of the objective function, which is evaluated in a forward manner in the spirit of primal-dual splitting methods.
In Sect. 2, we present the problem to be solved together with its Toland dual, and we attach to them a primal-dual formulation in the form of a minimisation problem. We derive first-order optimality conditions and relate the optimal solutions and the critical points of the primal-dual minimisation problem to the optimal solutions and, respectively, the critical points of both the primal and the dual optimisation problem.
In Sect. 3, we propose a double-proximal d.c. algorithm, which generates both a primal and a dual sequence of iterates, and we show several properties which make it comparable to the DCA. More precisely, we prove a descent property for the objective function values of a primal-dual formulation and show that every cluster point of the sequence of primal iterates is a critical point of the primal problem, while every cluster point of the sequence of dual iterates is a critical point of the dual problem.
In Sect. 4, we show global convergence of our algorithm and convergence rates for the iterates in certain cases, provided that the objective function of the primal-dual reformulation satisfies the Kurdyka-Łojasiewicz property; in other words, it is a KŁ function. The convergence analysis relies on methods and concepts of real algebraic geometry introduced by Łojasiewicz [14] and Kurdyka [15] and later developed in the nonsmooth setting by Attouch, Bolte, Redont, and Soubeyran [16], Attouch, Bolte, and Svaiter [17] and Bolte, Sabach and Teboulle [18]. One of the remarkable properties of the KŁ functions is their ubiquity in applications (see [18]). The class of KŁ functions contains semi-algebraic, real sub-analytic, semiconvex, uniformly convex and convex functions satisfying a growth condition.
We close our paper with some numerical examples addressing an image deblurring and denoising problem in the context of different d.c. regularisations.

Notation and preliminaries
For the theory of convex analysis in finite-dimensional spaces, see the book [19]. We shall consider functions taking values in the extended real line R̄ := R ∪ {+∞, −∞}. We agree on the order −∞ < a < +∞ for any real number a and on the usual conventions for the arithmetic operations involving ±∞ (see [20]). Let H be a real finite-dimensional Hilbert space. For a function f : H → R̄, we denote by dom f := {x ∈ H | f(x) < +∞} its domain; f is called proper if dom f ≠ ∅ and f(x) > −∞ for all x ∈ H. The function f is called convex if

f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y)

for all x, y ∈ H and 0 ≤ λ ≤ 1. The conjugate function f* : H → R̄ is given by

f*(y) := sup { ⟨x, y⟩ − f(x) | x ∈ H }.

If f is proper, convex, and lower semicontinuous, then f** := (f*)* = f by the Fenchel–Moreau theorem.
The indicator function δ_C : H → R̄ of a set C ⊆ H is defined by δ_C(x) := 0 if x ∈ C and δ_C(x) := +∞ otherwise. Let γ > 0 and f : H → R̄ be proper, convex, and lower semicontinuous. The proximal point of γf at x ∈ H is defined by

prox_{γf}(x) := argmin { f(y) + (1/(2γ))‖y − x‖² | y ∈ H }.
The set of minimisers in the definition above is a singleton [21, Proposition 12.15], and the proximal point is characterised by the variational inequality [21, Proposition 12.26]

⟨y − prox_{γf}(x), x − prox_{γf}(x)⟩ + γf(prox_{γf}(x)) ≤ γf(y) for all y ∈ H,

which is equivalent to

(1/γ)(x − prox_{γf}(x)) ∈ ∂f(prox_{γf}(x)).    (1)

When dealing with nonconvex and nonsmooth functions, we have to consider subdifferentials more general than the convex one. The Fréchet subdifferential ∂_F f(x) at x ∈ H of a proper and lower semicontinuous function f : H → R̄ is defined by

∂_F f(x) := { x* ∈ H | liminf_{y→x} (f(y) − f(x) − ⟨x*, y − x⟩)/‖y − x‖ ≥ 0 }

if f(x) ∈ R, and ∂_F f(x) := ∅ otherwise. The limiting (Mordukhovich) subdifferential ∂_L f(x) consists of all x* ∈ H for which there exist sequences x_n → x with f(x_n) → f(x) and x*_n ∈ ∂_F f(x_n) with x*_n → x* as n → ∞. For a proper, convex, and lower semicontinuous function, both notions coincide with the convex subdifferential ∂f.
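As a quick numerical illustration of the characterisation (1) (our own sketch, with the ℓ₁ norm as an example; not part of the original text), one can check that the scaled residual of a proximal step is indeed a subgradient at the proximal point:

```python
import numpy as np

def prox_l1(x, gamma):
    # prox of gamma * ||.||_1 at x, computed componentwise (soft thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

rng = np.random.default_rng(0)
x, gamma = rng.standard_normal(6), 0.7
p = prox_l1(x, gamma)
u = (x - p) / gamma  # by (1), u must lie in the subdifferential of ||.||_1 at p

# Subdifferential of the l1 norm, componentwise:
# u_i = sign(p_i) where p_i != 0, and |u_i| <= 1 where p_i == 0.
assert np.all(np.abs(u) <= 1 + 1e-12)
assert np.allclose(u[p != 0], np.sign(p[p != 0]))
```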

Problem statement
Let G and H be real finite-dimensional Hilbert spaces, let g : H → R̄ and h : G → R̄ be proper, convex, and lower semicontinuous functions, let ϕ : H → R be a convex, Fréchet differentiable function with a (1/β)-Lipschitz continuous gradient, for some β > 0, and let K : H → G be a linear mapping (and K* : G → H its adjoint). We consider the problem

min { g(x) + ϕ(x) − h(Kx) | x ∈ H }    (2)

together with its Toland dual problem [22,23]

min { h*(y) − (g + ϕ)*(K*y) | y ∈ G }.    (3)
The following primal-dual formulation will turn out to be useful in the sequel:

min { Φ(x, y) := g(x) + ϕ(x) + h*(y) − ⟨Kx, y⟩ | (x, y) ∈ H × G }.    (4)

Proposition 1 The following statements hold:
1. For every x ∈ H, inf { Φ(x, y) | y ∈ G } = g(x) + ϕ(x) − h(Kx).
2. The optimal values of (2), (3), and (4) coincide.
3. Let x̄ ∈ H be a solution of (2). Then ∂(h ∘ K)(x̄) ⊆ ∂(g + ϕ)(x̄).
4. Let ȳ ∈ G be a solution of (3). Then ∂((g + ϕ)* ∘ K*)(ȳ) ⊆ ∂h*(ȳ).
5. Let (x̄, ȳ) ∈ H × G be a solution of (4). Then

K*ȳ − ∇ϕ(x̄) ∈ ∂g(x̄)    (5)

and

Kx̄ ∈ ∂h*(ȳ),    (6)

x̄ is a solution of (2), and ȳ is a solution of (3).
Proof 1. By the Fenchel–Moreau theorem, applied to h, we have, for every x ∈ H,

h(Kx) = h**(Kx) = sup { ⟨Kx, y⟩ − h*(y) | y ∈ G },

so inf { Φ(x, y) | y ∈ G } = g(x) + ϕ(x) − h(Kx).
2. Let x ∈ H and y ∈ G. Then

Φ(x, y) = h*(y) − (⟨x, K*y⟩ − g(x) − ϕ(x)) ≥ h*(y) − (g + ϕ)*(K*y),

so the optimal value of (4) is not smaller than that of (3), and the other inequality is verified by an analogous calculation.
3. This kind of optimality condition is classical in d.c. programming, see e.g. [8, Proposition 1.1].
4. The proof of this statement is analogous.
5. Let (x̄, ȳ) be a solution of (4). (In particular, if such a solution exists, the common optimal value of (2), (3) and (4) must be finite.) The function x ↦ Φ(x, ȳ) is convex and takes a minimum at x̄. Therefore

0 ∈ ∂g(x̄) + ∇ϕ(x̄) − K*ȳ,

which proves (5). The same argument works for the function y ↦ Φ(x̄, y) and implies

0 ∈ ∂h*(ȳ) − Kx̄,

which is (6). For these inclusions, we obtain equality in the Young–Fenchel inequality [21, Proposition 16.9], i.e.,

(g + ϕ)(x̄) + (g + ϕ)*(K*ȳ) = ⟨x̄, K*ȳ⟩ and h(Kx̄) + h*(ȳ) = ⟨Kx̄, ȳ⟩.

Therefore, by subtracting these equalities from the definition of Φ,

g(x̄) + ϕ(x̄) − h(Kx̄) = Φ(x̄, ȳ) = h*(ȳ) − (g + ϕ)*(K*ȳ).

Since (x̄, ȳ) is a solution of (4), the last expression equals the common optimal value of (2), (3), and (4), so x̄ is a solution of (2) and ȳ is a solution of (3).
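The first statement of Proposition 1 is easy to verify numerically on scalar toy data (our own sanity check; the concrete choices of g, ϕ, h and K below are arbitrary and not taken from the text). For h = |·| we have h* = δ_{[−1,1]}, so the infimum of Φ(x, ·) can be approximated by a grid search over [−1, 1]:

```python
import numpy as np

# Toy data on R: g(x) = |x|, phi(x) = x**2 / 2, h(z) = |z|, K = 2.
# Then h* is the indicator of [-1, 1], and
# Phi(x, y) = g(x) + phi(x) + h*(y) - K*x*y, finite only for y in [-1, 1].
K = 2.0
g = lambda x: abs(x)
phi = lambda x: 0.5 * x ** 2
h = lambda z: abs(z)

ys = np.linspace(-1.0, 1.0, 100001)  # grid over dom h* = [-1, 1]
for x in [-1.3, 0.0, 0.7, 2.5]:
    inf_y = np.min(g(x) + phi(x) - K * x * ys)  # inf over y of Phi(x, y)
    assert abs(inf_y - (g(x) + phi(x) - h(K * x))) < 1e-9
print("inf over y of Phi(x, y) equals g(x) + phi(x) - h(Kx)")
```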

Definition 1
We say that (x̄, ȳ) ∈ H × G is a critical point of the objective function Φ of (4) if the inclusions (5) and (6) are satisfied. We denote by crit Φ the set of critical points of the function Φ.
By adopting the terminology of e.g. [6, p. 297], we denote by

crit(g + ϕ − h ∘ K) := { x ∈ H | (∂g(x) + ∇ϕ(x)) ∩ K*∂h(Kx) ≠ ∅ }

the set of critical points of the objective function g + ϕ − h ∘ K of (2) and by

crit(h* − (g + ϕ)* ∘ K*) := { y ∈ G | ∂h*(y) ∩ K∂(g + ϕ)*(K*y) ≠ ∅ }

the set of critical points of the objective function of (3).

Remark 1 Let (x̄, ȳ) ∈ crit Φ, i.e., let (5) and (6) be satisfied. Then K*ȳ ∈ (∂g(x̄) + ∇ϕ(x̄)) ∩ K*∂h(Kx̄) and Kx̄ ∈ ∂h*(ȳ) ∩ K∂(g + ϕ)*(K*ȳ), so x̄ ∈ crit(g + ϕ − h ∘ K) and ȳ ∈ crit(h* − (g + ϕ)* ∘ K*).

The algorithm
Let (x₀, y₀) ∈ H × G, and let (γ_n)_{n≥0} and (μ_n)_{n≥0} be sequences of positive numbers. We propose the following iterative scheme: for all n ≥ 0 set

x_{n+1} := prox_{γ_n g}(x_n + γ_n K*y_n − γ_n ∇ϕ(x_n)),    (9)
y_{n+1} := prox_{μ_n h*}(y_n + μ_n K x_{n+1}).    (10)

Throughout, we will require that the stepsize sequences satisfy

0 < inf_{n≥0} γ_n ≤ sup_{n≥0} γ_n < 2β and 0 < inf_{n≥0} μ_n ≤ sup_{n≥0} μ_n < +∞.    (14)

By the inequalities for the proximal points, we have, for every x ∈ H, y ∈ G and n ≥ 0,

g(x) ≥ g(x_{n+1}) + (1/γ_n)⟨x − x_{n+1}, x_n − x_{n+1}⟩ + ⟨x − x_{n+1}, K*y_n − ∇ϕ(x_n)⟩,
h*(y) ≥ h*(y_{n+1}) + (1/μ_n)⟨y − y_{n+1}, y_n − y_{n+1}⟩ + ⟨y − y_{n+1}, Kx_{n+1}⟩.

Moreover, using [21, Theorem 18.15 (iii)] and the subdifferential inequality, we have for every x ∈ H and n ≥ 0,

ϕ(x_{n+1}) ≤ ϕ(x) + ⟨∇ϕ(x_n), x_{n+1} − x⟩ + (1/(2β))‖x_{n+1} − x_n‖².

We consider the auxiliary function Φ : H × G → R̄ from (4), i.e., Φ(x, y) = g(x) + ϕ(x) + h*(y) − ⟨Kx, y⟩. By the inequalities above, we have, for arbitrary x ∈ H, y ∈ G and n ≥ 0,

Φ(x_{n+1}, y_{n+1}) ≤ Φ(x, y) + (1/γ_n)⟨x − x_{n+1}, x_{n+1} − x_n⟩ + (1/μ_n)⟨y − y_{n+1}, y_{n+1} − y_n⟩ + (1/(2β))‖x_{n+1} − x_n‖² + ⟨K(x_{n+1} − x), y_n − y⟩    (11)

and, in particular,

Φ(x_{n+1}, y_n) ≤ Φ(x_n, y_n) − (1/γ_n − 1/(2β))‖x_{n+1} − x_n‖².    (12)

Furthermore, for any n ≥ 0,

Φ(x_{n+1}, y_{n+1}) ≤ Φ(x_{n+1}, y_n) − (1/μ_n)‖y_{n+1} − y_n‖².    (13)

The last two inequalities give rise to the following statement.

Proposition 2 Let the sequences (γ_n)_{n≥0} and (μ_n)_{n≥0} satisfy (14). Then the sequence (Φ(x_n, y_n))_{n≥0} is monotonically decreasing.
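In code, one pass of the scheme consists of a forward-backward step (9) in the primal variable followed by a proximal step (10) in the dual variable. The following NumPy sketch (ours; the function names and the concrete test problem are illustrative assumptions, not taken from the paper) implements (9)-(10) with constant stepsizes satisfying (14):

```python
import numpy as np

def double_prox_dc(prox_g, prox_hstar, grad_phi, K, x0, y0,
                   gamma, mu, iterations=500):
    """Sketch of the scheme (9)-(10). Assumes prox_g(v, t) = prox of t*g at v
    and prox_hstar(w, t) = prox of t*h* at w are available in closed form,
    and that the constant stepsizes gamma < 2*beta and mu > 0 satisfy (14)."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iterations):
        # (9): gradient step in phi, forward step through K*, prox step in g.
        x = prox_g(x + gamma * (K.T @ y) - gamma * grad_phi(x), gamma)
        # (10): forward step through K at the new x, prox step in h*.
        y = prox_hstar(y + mu * (K @ x), mu)
    return x, y

# Toy d.c. problem: minimise ||x||_1 + ||x - a||^2 / 2 - ||Kx||_1, i.e.
# g = ||.||_1, phi(x) = ||x - a||^2 / 2 (so beta = 1), h = ||.||_1.
rng = np.random.default_rng(1)
n = 20
K = rng.standard_normal((n, n)) / np.sqrt(n)
a = rng.standard_normal(n)
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
prox_hstar = lambda w, t: np.clip(w, -1.0, 1.0)  # h* is an indicator, so its
                                                 # prox is the projection
grad_phi = lambda x: x - a
x, y = double_prox_dc(prox_g, prox_hstar, grad_phi, K,
                      np.zeros(n), np.zeros(n), gamma=1.0, mu=1.0)
```

Note that, in line with Remark 3 below, the iteration is completely determined by the starting points and the stepsizes.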

Proposition 3 Let the sequences (γ_n)_{n≥0} and (μ_n)_{n≥0} satisfy (14), and let the sequence (Φ(x_n, y_n))_{n≥0} be bounded from below. Then

Σ_{n≥0} ‖x_{n+1} − x_n‖² < +∞ and Σ_{n≥0} ‖y_{n+1} − y_n‖² < +∞;

in particular, x_{n+1} − x_n → 0 and y_{n+1} − y_n → 0 as n → ∞.
Proof Let N ≥ 1 be an integer. Sum up (12) and (13) for n = 0, …, N − 1 and obtain

Φ(x_N, y_N) ≤ Φ(x₀, y₀) − Σ_{n=0}^{N−1} [ (1/γ_n − 1/(2β))‖x_{n+1} − x_n‖² + (1/μ_n)‖y_{n+1} − y_n‖² ].

By assumption, the expression on the left-hand side is bounded from below by a fixed real number M for any N ≥ 1, and so is the right-hand side. The numbers 1/γ_n − 1/(2β) and 1/μ_n are bounded from below by a positive number, say ε > 0, so

ε Σ_{n=0}^{N−1} ( ‖x_{n+1} − x_n‖² + ‖y_{n+1} − y_n‖² ) ≤ Φ(x₀, y₀) − M.

Since N is arbitrary, the series converge.

Proposition 4 Let the sequences (γ_n)_{n≥0} and (μ_n)_{n≥0} satisfy (14). If (x_n)_{n≥0} and (y_n)_{n≥0} are bounded, then:
1. every cluster point of (x_n)_{n≥0} is a critical point of (2);
2. every cluster point of (y_n)_{n≥0} is a critical point of (3); and
3. every cluster point of (x_n, y_n)_{n≥0} is a critical point of (4).
Proof Let x̄ be a cluster point of (x_n)_{n≥0}. Let (x_{n_k})_{k≥0} be a subsequence of (x_n)_{n≥0} such that x_{n_k} → x̄. By another transition to a subsequence, we can guarantee y_{n_k} → ȳ for some ȳ ∈ G, since (y_{n_k})_{k≥0} is bounded. By (9) and (10), we obtain, for every k ≥ 0,

(x_{n_k} − x_{n_k+1})/γ_{n_k} + K*y_{n_k} − ∇ϕ(x_{n_k}) ∈ ∂g(x_{n_k+1})

and

(y_{n_k} − y_{n_k+1})/μ_{n_k} + K x_{n_k+1} ∈ ∂h*(y_{n_k+1}),

respectively. By Proposition 3, the first summands on the left-hand side of the above inclusions tend to zero as k → ∞. Using the continuity of ∇ϕ and the closedness of the graphs of ∂g and ∂h* and passing to the limit, we get K*ȳ − ∇ϕ(x̄) ∈ ∂g(x̄) and Kx̄ ∈ ∂h*(ȳ), which means that (x̄, ȳ) is a critical point of Φ. The first statement follows by considering Remark 1. For the second statement, one has to choose x̄ and ȳ in reverse order; for the third one, they are chosen at the same time.
Remark 2 It is clear that one cannot expect the cluster points to be minima, since it is easy to see that (x̄, ȳ) is a fixed point of the iteration (9)-(10) if and only if (5) and (6) are satisfied, i.e., if and only if (x̄, ȳ) is a critical point of Φ (independently of the choice of the parameters (γ_n)_{n≥0} and (μ_n)_{n≥0}).

Remark 3
One should notice that the iterative scheme given by (9) and (10) does not use any subgradients, but only proximal points and gradients, which are continuous with respect to their input. In contrast, the DCA and its variants use the subgradients of at least one of the involved functions. The performance might therefore depend on the subgradient oracle, whereas our algorithm is determined by the choice of the starting points and the stepsize sequences alone. This is especially an issue when dealing with nonsmooth functions like the ℓ₁ norm. For instance, the primal, dual and primal-dual formulations of such a problem may have two primal-dual stationary points, say (x, y) = (0, 0) and (x, y) = (−1, −1), but only the latter is a local minimum for any of these problems. The classical DCA might converge to the former stationary point for an unfavourable choice of the subgradients. The same might happen to the double-proximal d.c. algorithm, see Fig. 1. A possible solution to the problem of getting stuck in stationary points which are not local minima is the use of inertial techniques according to Polyak [25], which are already well established in proximal algorithms for convex and nonconvex problems, see e.g. [26,27].

Proposition 5 Let the sequences (γ_n)_{n≥0} and (μ_n)_{n≥0} satisfy (14). For any n ≥ 0, the following statements are equivalent:
1. x_{n+1} = x_n and y_{n+1} = y_n;
2. (x_n, y_n) is a critical point of Φ;
3. Φ(x_{n+1}, y_{n+1}) = Φ(x_n, y_n).

Proof It is easily seen by the formula (1) that the first two statements are equivalent. The equivalence of the latter two follows by (12) and (13).
Next, we summarise the convergence properties of the double-proximal d.c. algorithm. To this end, we denote by ω(x₀, y₀) the set of cluster points of the sequence generated by (9) and (10) with the initial points x₀ and y₀. See also [18, Lemma 5] for an analogous result for a nonconvex forward-backward scheme.

Lemma 1
Let H and G be two real finite-dimensional Hilbert spaces, let g : H → R̄ and h : G → R̄ be proper, convex, and lower semicontinuous functions. Let ϕ : H → R be a convex, Fréchet differentiable function with a (1/β)-Lipschitz continuous gradient, for some β > 0, and let K : H → G be a linear mapping. Let the sequences (γ_n)_{n≥0} and (μ_n)_{n≥0} satisfy (14). Moreover, assume that the sequence (x_n, y_n)_{n≥0} generated by (9) and (10) is bounded. Then the following assertions hold:
1. ∅ ≠ ω(x₀, y₀) ⊆ crit Φ ⊆ crit(g + ϕ − h ∘ K) × crit(h* − (g + ϕ)* ∘ K*);
2. dist((x_n, y_n), ω(x₀, y₀)) → 0 as n → ∞;
3. if the common optimal value of the problems (2), (3), and (4) is > −∞, then ω(x₀, y₀) is a nonempty, compact, and connected set, and so are the sets of the limit points of the sequences (x_n)_{n≥0} and (y_n)_{n≥0};
4. the objective function Φ is finite and constant on ω(x₀, y₀) provided that the optimal value is finite.

Proof 1. It is clear that the set of cluster points of a bounded sequence is nonempty.
That every cluster point is critical for Φ is the statement of Proposition 4. The last inclusion is discussed in Remark 1.
2. Assume that the assertion does not hold. In this case, there exist an ε > 0 and a subsequence (x_{n_k}, y_{n_k})_{k≥0} of (x_n, y_n)_{n≥0} with dist((x_{n_k}, y_{n_k}), ω(x₀, y₀)) > ε for all k ≥ 0. The subsequence is bounded, so it has a cluster point, which is a cluster point of the original sequence (x_n, y_n)_{n≥0} as well, thus an element of ω(x₀, y₀). This contradicts the assumption dist((x_{n_k}, y_{n_k}), ω(x₀, y₀)) > ε for all k ≥ 0.
3. Since the sequence (x_n, y_n)_{n≥0} is bounded, the sets

Ω_k := cl { (x_n, y_n) | n ≥ k }

are bounded and closed, hence compact for any k ≥ 0. Their intersection ∩_{k≥0} Ω_k, which equals the set of cluster points of (x_n, y_n)_{n≥0}, is therefore compact, too. The connectedness follows from the property ‖(x_{n+1}, y_{n+1}) − (x_n, y_n)‖ → 0 given by Proposition 3, and the proof is completely analogous to the one of [18, Lemma 5 (iii)].
4. According to Proposition 2, the function values Φ(x_n, y_n) are monotonically decreasing, thus convergent, say Φ(x_n, y_n) → ℓ. Let (x̄, ȳ) be an arbitrary limit point of the sequence (x_n, y_n)_{n≥0}, and let (x_{n_k}, y_{n_k})_{k≥0} be a subsequence converging to (x̄, ȳ) as k → ∞. By lower semicontinuity, we have Φ(x̄, ȳ) ≤ lim_{k→∞} Φ(x_{n_k}, y_{n_k}) = ℓ. On the other hand, consider (11) with x = x̄ and y = ȳ. The correction terms on the right-hand side converge to 0 as we let n → ∞ along the subsequence (n_k)_{k≥0}, so ℓ = lim_{n→∞} Φ(x_n, y_n) ≤ Φ(x̄, ȳ).

Remark 4
To guarantee the boundedness of the iterates, one could assume that the objective function Φ of the primal-dual minimisation problem (4) is coercive, i.e., that its lower level sets are bounded.

Definition 2
Let η ∈ (0, +∞]. We denote by Θ_η the class of all concave and continuous functions ϑ : [0, η) → [0, +∞) which satisfy ϑ(0) = 0, are continuously differentiable on (0, η) and continuous at 0, and have positive derivatives ϑ' > 0 on (0, η). Let H be a real finite-dimensional Hilbert space, and let Φ : H → R̄ be a proper and lower semicontinuous function. We say that Φ satisfies the Kurdyka–Łojasiewicz property at x̄ ∈ dom ∂_L Φ := {x ∈ H | ∂_L Φ(x) ≠ ∅} if there exist some η ∈ (0, +∞], a neighbourhood U of x̄ and a function ϑ ∈ Θ_η such that for all

x ∈ U ∩ { x ∈ H | Φ(x̄) < Φ(x) < Φ(x̄) + η }

the following inequality holds:

ϑ'(Φ(x) − Φ(x̄)) · dist(0, ∂_L Φ(x)) ≥ 1.

We call Φ a KŁ function if it satisfies the Kurdyka–Łojasiewicz property at each point x ∈ dom ∂_L Φ.
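For instance (our own toy check, not from the text), the function f(x) = x² satisfies the KŁ property at x̄ = 0 with ϑ(t) = 2√t, which corresponds to M = 2 and θ = 1/2 in the notation of the convergence rates section below; the defining inequality can be verified on a grid:

```python
import numpy as np

# KL inequality for f(x) = x**2 at xbar = 0 with theta(t) = 2 * sqrt(t):
# theta'(f(x) - f(0)) * dist(0, grad f(x)) = (1 / |x|) * |2 x| = 2 >= 1.
x = np.linspace(-1.0, 1.0, 2001)
x = x[x != 0.0]  # the inequality is only required where f(x) > f(xbar)
lhs = (1.0 / np.sqrt(x ** 2)) * np.abs(2.0 * x)
assert np.all(lhs >= 1.0)
print("f(x) = x^2 satisfies the KL inequality at 0 with theta(t) = 2*sqrt(t)")
```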
The following uniform KŁ property is due to [18, Lemma 6].

Lemma 2 Let Ω be a compact set, and let Φ : H → R̄ be a proper and lower semicontinuous function. Assume that Φ is constant on Ω and satisfies the KŁ property at each point of Ω. Then there exist ε > 0, η > 0, and ϑ ∈ Θ_η such that for all ū ∈ Ω and all u in the intersection

{ u ∈ H | dist(u, Ω) < ε } ∩ { u ∈ H | Φ(ū) < Φ(u) < Φ(ū) + η }    (15)

one has ϑ'(Φ(u) − Φ(ū)) · dist(0, ∂_L Φ(u)) ≥ 1.

In the KŁ property, we need the distance of a subgradient from zero. In our algorithm, we have the following result.

Lemma 3 Let the sequences (γ_n)_{n≥0} and (μ_n)_{n≥0} satisfy (14). For any n ≥ 1, define

x*_n := (x_{n−1} − x_n)/γ_{n−1} + K*(y_{n−1} − y_n) + ∇ϕ(x_n) − ∇ϕ(x_{n−1}) and y*_n := (y_{n−1} − y_n)/μ_{n−1}.

Then (x*_n, y*_n) ∈ ∂_L Φ(x_n, y_n) and

‖x*_n‖ ≤ (1/γ_{n−1} + 1/β)‖x_n − x_{n−1}‖ + ‖K‖ ‖y_n − y_{n−1}‖, ‖y*_n‖ ≤ (1/μ_{n−1})‖y_n − y_{n−1}‖.

The case when Φ is a KŁ function
Theorem 1 Let the sequences (γ_n)_{n≥0} and (μ_n)_{n≥0} satisfy (14), and let the sequence (x_n, y_n)_{n≥0} generated by (9) and (10) be bounded. Suppose that Φ is in addition a KŁ function bounded from below. Then (x_n, y_n)_{n≥0} is a Cauchy sequence, thus convergent to a critical point of Φ.
Proof If Φ(x_{n+1}, y_{n+1}) = Φ(x_n, y_n) for some n ≥ 0, then, by Proposition 5, the iterates are constant from index n on, and the assertion holds trivially. Hence, assume that the sequence (Φ(x_n, y_n))_{n≥0} is strictly decreasing. Since Φ is bounded from below, the values Φ(x_n, y_n) converge to some ℓ ∈ R, and, by Lemma 1, Φ ≡ ℓ on the nonempty and compact set ω(x₀, y₀), with Φ(x_n, y_n) > ℓ for all n ≥ 0. Lemma 2, applied to Ω := ω(x₀, y₀), provides ε > 0, η > 0 and ϑ ∈ Θ_η. By Lemma 1, there exists n₁ ≥ 1 with dist((x_n, y_n), ω(x₀, y₀)) < ε for all n ≥ n₁, and there exists n₂ ≥ 1 with ℓ < Φ(x_n, y_n) < ℓ + η for all n ≥ n₂. Moreover, by Lemma 3 and (14), there exists C₁ > 0 such that ‖(x*_n, y*_n)‖ ≤ C₁(‖x_n − x_{n−1}‖ + ‖y_n − y_{n−1}‖) for all n ≥ 1.

In the following, fix an arbitrary n ≥ n₀ := max {n₁, n₂, 1}. Then (x_n, y_n) is an element of the intersection (15). Consequently,

ϑ'(Φ(x_n, y_n) − ℓ) · ‖(x*_n, y*_n)‖ ≥ ϑ'(Φ(x_n, y_n) − ℓ) · dist(0, ∂_L Φ(x_n, y_n)) ≥ 1.    (17)

By the concavity of ϑ, we get, for all s ∈ (0, η),

ϑ(Φ(x_n, y_n) − ℓ) − ϑ(s) ≥ ϑ'(Φ(x_n, y_n) − ℓ) · (Φ(x_n, y_n) − ℓ − s),

so, setting in particular s := Φ(x_{n+1}, y_{n+1}) − ℓ ∈ (0, η),

ϑ(Φ(x_n, y_n) − ℓ) − ϑ(Φ(x_{n+1}, y_{n+1}) − ℓ) ≥ (Φ(x_n, y_n) − Φ(x_{n+1}, y_{n+1})) / ‖(x*_n, y*_n)‖,

where we used (17). Moreover, by (12) and (13),

Φ(x_n, y_n) − Φ(x_{n+1}, y_{n+1}) ≥ c (‖x_{n+1} − x_n‖² + ‖y_{n+1} − y_n‖²)

with c := min { inf_{n≥0}(1/γ_n − 1/(2β)), inf_{n≥0}(1/μ_n) } > 0. Let us define the following shorthands:

ε_n := ϑ(Φ(x_n, y_n) − ℓ) and δ_n := ‖x_n − x_{n+1}‖ + ‖y_n − y_{n+1}‖

for n ≥ n₀, to obtain the inequality

ε_n − ε_{n+1} ≥ c (‖x_{n+1} − x_n‖² + ‖y_{n+1} − y_n‖²) / ‖(x*_n, y*_n)‖.

On the other hand, by Lemma 3 and the inequality 2ab ≤ a² + b² (a, b ≥ 0), for any n ≥ n₀,

‖(x*_n, y*_n)‖ ≤ C₁ δ_{n−1} and δ_n² ≤ 2 (‖x_{n+1} − x_n‖² + ‖y_{n+1} − y_n‖²),

so, with C₀ := 2C₁/c,

δ_n² ≤ C₀ δ_{n−1} (ε_n − ε_{n+1}).    (18)

By the arithmetic-geometric inequality, for any r > 0 and n ≥ n₀,

δ_n ≤ √( (C₀ r δ_{n−1}) · ((ε_n − ε_{n+1})/r) ) ≤ (C₀ r/2) δ_{n−1} + (ε_n − ε_{n+1})/(2r)

(recall that, by Proposition 2 and the properties of ϑ, the sequence (ε_n)_{n≥n₀} is decreasing, so ε_n − ε_{n+1} ≥ 0). For any N ≥ n₀ + 1, we have, by summation over n = n₀ + 1, …, N combined with (18),

Σ_{n=n₀+1}^N δ_n ≤ (C₀ r/2) Σ_{n=n₀+1}^N δ_{n−1} + ε_{n₀+1}/(2r) ≤ (C₀ r/2) Σ_{n=n₀+1}^N δ_n + (C₀ r/2) δ_{n₀} + ε_{n₀+1}/(2r),

therefore, for any N ≥ n₀ + 1 and 0 < r < 1/C₀,

Σ_{n=n₀+1}^N δ_n ≤ (1 − C₀ r/2)^{−1} ( (C₀ r/2) δ_{n₀} + ε_{n₀+1}/(2r) ) ≤ C₀ r δ_{n₀} + ε_{n₀+1}/r.

The last right-hand side does not depend on N; thus, we conclude that Σ_{k=n₀+1}^∞ δ_k is finite, and so are Σ_{n≥0} ‖x_n − x_{n+1}‖ and Σ_{n≥0} ‖y_n − y_{n+1}‖. Consequently, (x_n, y_n)_{n≥0} is a Cauchy sequence, hence convergent, and its limit is a critical point of Φ by Proposition 4.

Convergence rates
Lemma 4 Let the assumptions of Theorem 1 be satisfied, and assume that Φ is a KŁ function with ϑ(t) = Mt^{1−θ} for some M > 0 and 0 ≤ θ < 1. Let x̄ and ȳ be the limit points of the sequences (x_n)_{n≥0} and (y_n)_{n≥0}, respectively (which exist due to Theorem 1). Then the following convergence rates are guaranteed:
1. If θ = 0, then there exists n₀ ≥ 0 such that x_n = x_{n₀} and y_n = y_{n₀} for all n ≥ n₀;
2. If 0 < θ ≤ 1/2, then there exist c > 0 and 0 ≤ q < 1 such that ‖x_n − x̄‖ ≤ c qⁿ and ‖y_n − ȳ‖ ≤ c qⁿ for all n ≥ 0;
3. If 1/2 < θ < 1, then there exists c > 0 such that ‖x_n − x̄‖ ≤ c n^{−(1−θ)/(2θ−1)} and ‖y_n − ȳ‖ ≤ c n^{−(1−θ)/(2θ−1)} for all n ≥ 1.
Proof 1. First, let θ = 0. Assume to the contrary (see Proposition 5) that (x_{n+1}, y_{n+1}) ≠ (x_n, y_n) for all n ≥ 0. We have ϑ'(t) = M for all t > 0 and thus, by (17), M · ‖(x*_n, y*_n)‖ ≥ 1 for any n ≥ n₀, which contradicts either Lemma 3 or Proposition 3, by which ‖(x*_n, y*_n)‖ → 0 as n → ∞.

Before considering the other cases, assume from now on that (x_n, y_n) is not a critical point of Φ for any n ≥ 0 (otherwise the iterates are eventually constant and the assertions hold trivially). Notice that ϑ'(t) = (1 − θ)Mt^{−θ}, so (17) together with Lemma 3 yields

(Φ(x_n, y_n) − ℓ)^θ ≤ (1 − θ)M C₁ δ_{n−1}, hence ε_n = M (Φ(x_n, y_n) − ℓ)^{1−θ} ≤ C₂ δ_{n−1}^{(1−θ)/θ}

for some C₂ > 0 and all n > n₀. Furthermore, set Δ_n := Σ_{k=n}^∞ δ_k, so that max {‖x_n − x̄‖, ‖y_n − ȳ‖} ≤ Δ_n. In the proof of Theorem 1, we have shown that, for 0 < r < 1/C₀ and any n > n₀,

Δ_n ≤ (C₀ r/2) Δ_{n−1} + ε_n/(2r).    (21)

2. Let 0 < θ ≤ 1/2. Then (1 − θ)/θ ≥ 1, and since δ_{n−1} → 0, we may enlarge C₂ such that ε_n ≤ C₂ δ_{n−1} = C₂ (Δ_{n−1} − Δ_n) for all n > n₀. Plugging this into (21) and rearranging gives

Δ_n ≤ q Δ_{n−1} with q := (C₀ r/2 + C₂/(2r)) / (1 + C₂/(2r)) < 1,

so Δ_n ≤ c qⁿ for a suitable c > 0, which proves the assertion.

3. Let 1/2 < θ < 1. Then 0 < (1−θ)/θ < 1, so δ_n → 0 as n → ∞ implies that the second term on the right-hand side of (21) is the dominant one. Therefore, we find n₁ ≥ n₀ + 1 and C₃ > 0 such that

Δ_n ≤ C₃ (Δ_{n−1} − Δ_n)^{(1−θ)/θ}

for any n ≥ n₁. Then, for any n ≥ n₁,

Δ_n^{θ/(1−θ)} ≤ C₃^{θ/(1−θ)} (Δ_{n−1} − Δ_n).

We define h : (0, +∞) → R, h(s) = s^{−θ/(1−θ)}, and notice that h is monotonically decreasing, as is the sequence (Σ_{k=n}^∞ δ_k)_{n≥n₁}. Therefore, a standard argument (see e.g. [16]) yields a ν > 0 such that, for any n ≥ n₁,

Δ_n^{−(2θ−1)/(1−θ)} − Δ_{n−1}^{−(2θ−1)/(1−θ)} ≥ ν.

Thus, by induction, for any n ≥ n₁ + 1,

Δ_n^{−(2θ−1)/(1−θ)} ≥ Δ_{n₁}^{−(2θ−1)/(1−θ)} + ν(n − n₁) ≥ ν(n − n₁).

The assertion follows by

max {‖x_n − x̄‖, ‖y_n − ȳ‖} ≤ Δ_n ≤ (ν(n − n₁))^{−(1−θ)/(2θ−1)}

for any n ≥ n₁ + 1.

Application to image processing
Consider an image of size m × n pixels. (For the sake of simplicity, we consider grey-scale images only.) It can be represented by a vector x ∈ H := R^{mn} with entries in [0, 1] (where 0 represents pure black and 1 represents pure white). The original image x ∈ H is assumed to be blurred by a linear operator L : H → H (e.g. the camera is out of focus or moving during the exposure). Furthermore, it is corrupted by additive noise ν, so that only the result b = Lx + ν is known to us. We want to reconstruct the original image x by considering the minimisation problem

min { (1/2)‖Lx − b‖² + J(x) | x ∈ H },

where we denote by ‖·‖ the usual Euclidean norm, D : R^{mn} → R^{2mn} is the discrete gradient operator given by Dx = (K₁x, K₂x), with K₁ and K₂ being the horizontal and vertical finite-difference operators, respectively, J : H → R is a regularising functional penalising noisy images, and μ > 0 below is a regularisation parameter. We want to compare several choices of the functional J proposed by [1,4], all of which have in common that they aim to induce sparsity of Dx, i.e., many components equal to zero. The Zhang penalty [30] is defined by

J_Z(x) := (μ/α) Σ_{j=1}^{2mn} ( |(Dx)_j| − { 0 if |(Dx)_j| ≤ α; |(Dx)_j| − α otherwise } ),

where α > 0. Denoting the part after the curly brace as h_α(z_j) and setting h_α(z) := Σ_{j=1}^{2mn} h_α(z_j), we have

J_Z(x) = (μ/α) (‖Dx‖₁ − h_α(Dx)),

which is a difference of convex functions. The LZOX penalty [1] is defined by

J_LZOX(x) := μ ( ‖Dx‖₁ − α Σ_{j=1}^{mn} √(u_j² + v_j²) ),

where ‖·‖₁ denotes (as usual) the sum of the absolute values and where y = (u, v) is the splitting according to the definition of D. The algorithm (9)-(10) can now be applied to any of the models described above, since the models are written as d.c. problems and the components are easily accessible for computation, with the exception of the function ‖·‖₁ ∘ D, see [31]. For the latter, see the following section.
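For concreteness, the discrete gradient and both penalties can be implemented in a few lines (our own sketch; the forward-difference discretisation with Neumann boundary conditions is an assumption, since the exact stencil is not recoverable from the text):

```python
import numpy as np

def discrete_gradient(x, m, n):
    # Forward differences with Neumann boundary: one common choice for the
    # operators K1 (along one axis) and K2 (along the other axis).
    img = x.reshape(m, n)
    u = np.diff(img, axis=0, append=img[-1:, :]).ravel()
    v = np.diff(img, axis=1, append=img[:, -1:]).ravel()
    return u, v

def zhang_penalty(u, v, mu, alpha):
    # (mu/alpha) * (||Dx||_1 - h_alpha(Dx)), h_alpha(z) = sum_j max(|z_j| - alpha, 0).
    z = np.concatenate([u, v])
    return mu / alpha * (np.abs(z).sum() - np.maximum(np.abs(z) - alpha, 0.0).sum())

def lzox_penalty(u, v, mu, alpha):
    # mu * (anisotropic TV minus alpha times isotropic TV);
    # alpha = 0 recovers the convex anisotropic total variation.
    return mu * (np.abs(u).sum() + np.abs(v).sum()
                 - alpha * np.sqrt(u ** 2 + v ** 2).sum())

m, n = 4, 5
x = np.random.default_rng(2).random(m * n)
u, v = discrete_gradient(x, m, n)
print(zhang_penalty(u, v, mu=0.1, alpha=0.05), lzox_penalty(u, v, mu=0.1, alpha=0.8))
```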

The proximal point of the anisotropic total variation
In order to apply the algorithm (9)-(10) to any of the problems above, we have to calculate the proximal point of the anisotropic total variation, i.e., to solve the optimisation problem

min { (1/2)‖x − b‖² + γ‖Dx‖₁ | x ∈ H }    (22)

for some γ > 0 and b ∈ H in each step. The Fenchel dual problem [21, Chapter 19] is given by

inf { (1/2)‖b − γD*y‖² | y ∈ G, ‖y‖_∞ ≤ 1 }.    (23)

Instead of solving (22), we could also solve (23) (see [32]), as the following result shows.
Lemma 5 Let x* ∈ G be a solution of (23). Then x = b − γD*x* is a solution of (22).
Proof See [21, Example 19.7]. In short: (23) is the Fenchel dual problem of (22), strong duality holds, and x = b − γD*x* is the unique point satisfying the first-order optimality condition of (22).

To the formulation (23), the accelerated forward-backward algorithm of Beck and Teboulle [12] can be applied, since the objective function is differentiable and the feasible set is easy to project onto.
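A compact sketch of this dual approach (ours; the variable names and the one-dimensional test problem are our assumptions) with the accelerated projected gradient method of [12]:

```python
import numpy as np

def prox_aniso_tv(b, D, gamma, iterations=200):
    # Solves (23) by an accelerated projected gradient method and recovers
    # the proximal point of gamma * ||D .||_1 at b via Lemma 5.
    L = gamma ** 2 * np.linalg.norm(D, 2) ** 2   # Lipschitz constant of the gradient
    y = np.zeros(D.shape[0])
    z, t = y.copy(), 1.0
    for _ in range(iterations):
        grad = -gamma * (D @ (b - gamma * (D.T @ z)))
        y_new = np.clip(z - grad / L, -1.0, 1.0)  # projection onto the sup-norm ball
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        z = y_new + (t - 1.0) / t_new * (y_new - y)
        y, t = y_new, t_new
    return b - gamma * (D.T @ y)                 # Lemma 5

# One-dimensional toy example: D is the forward-difference matrix.
n = 50
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
b = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])
b += 0.1 * np.random.default_rng(3).standard_normal(n)
x = prox_aniso_tv(b, D, gamma=0.5)  # piecewise-constant denoising of b
```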

Numerical results
We implemented the proposed double-proximal d.c. algorithm for the models described above and tested the MATLAB code on a PC with an Intel Core i5 4670S (4 × 3.10 GHz) and 8 GB DDR3 RAM (1600 MHz). Our implementation used the method described in Sect. 5.1 until the ℓ∞ distance between two iterates was smaller than 10⁻⁴. Both stepsizes were chosen as μ_n = γ_n = 1/(8μ) for all n ≥ 0. As initial values, we chose x₀ = b and picked y₀ ∈ ∂h(Kx₀).
We picked the image texmos3 from http://sipi.usc.edu/database/database.php?volume=textures&image=64 and convolved it with a Gaussian kernel with a standard deviation of 9 pixels. Afterwards, we added white noise with standard deviation 50/255, projected the pixels back to the range [0, 1] and saved the image in TIFF format, rounding the brightness values to multiples of 1/255. See Fig. 2 for the original, blurry, and reconstructed images.
The improvement in signal-to-noise ratio (ISNR) of a reconstruction x_k is given by

ISNR(x_k) = 10 log₁₀ ( ‖x − b‖² / ‖x − x_k‖² ),

where x is the (usually unknown) original image, b is the known blurry and noisy image, and x_k is the reconstructed image. For the ISNR values after 50 iterations, see Tables 1 and 2. The development of the ISNR values over the iterations is shown in Fig. 3. We see that the nonconvex models provide reasonable reconstructions of the original image and that, for this particular choice of the stepsizes and the number of iterations, the best numerical performance is achieved not by the convex model (LZOX with α = 0) but by the nonconvex models.
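For reference, the ISNR value is computed as follows (a trivial helper of our own, assuming x, b and x_k are arrays of identical shape):

```python
import numpy as np

def isnr(x, b, xk):
    # ISNR(x_k) = 10 * log10(||x - b||^2 / ||x - x_k||^2); larger is better.
    return 10.0 * np.log10(np.sum((x - b) ** 2) / np.sum((x - xk) ** 2))
```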