Abstract
We introduce a proximal alternating linearized minimization (PALM) algorithm for solving a broad class of nonconvex and nonsmooth minimization problems. Building on the powerful Kurdyka–Łojasiewicz property, we derive a self-contained convergence analysis framework and establish that each bounded sequence generated by PALM globally converges to a critical point. Our approach allows us to analyze various classes of nonconvex-nonsmooth problems and related nonconvex proximal forward–backward algorithms with semialgebraic problem data, the latter property being shared by many functions arising in a wide variety of fundamental applications. A byproduct of our framework also shows that our results are new even in the convex setting. As an illustration of the results, we derive a new and simple globally convergent algorithm for solving the sparse nonnegative matrix factorization problem.
1 Introduction
Minimizing the sum of a finite collection of given functions has been at the heart of mathematical optimization research. Indeed, such an abstract model is a convenient vehicle which includes most practical models arising in a wide range of applications, whereby each function can be used to describe a specific required property of the problem at hand, either as an objective or as a constraint or both. Such a structure, while very general, still often allows one to beneficially exploit mathematical properties of the specific functions involved to devise simple and efficient algorithms. Needless to say, the literature in optimization research and its applications covering such a model is huge, and the present paper is not intended to review it. For some pioneering and early works that realized the potential of the sum optimization model, see, for instance, Auslender [4] and Bertsekas and Tsitsiklis [12], and the references therein.
Recently there has been a revived interest in the design and analysis of algorithms for solving optimization problems involving sums of functions, in particular in signal/image processing and machine learning. The main trend is solving very large scale problems, exploiting special structures/properties of the problem data to design very simple schemes (e.g., based on matrix/vector multiplications) that are nonetheless capable of producing reasonable approximate solutions efficiently. To achieve these goals, this recent research has placed particular emphasis on the development and analysis of algorithms for convex models which either describe a particular application at hand or are used as relaxations for tackling an original nonconvex model. We refer the reader to the two very recent edited volumes [29] and [32] for a wealth of relevant and interesting works covering a broad spectrum of theory and applications which reflect this intense research activity.
In this work, we completely depart from the convex setting. Indeed, in many of the alluded applications, the original optimization model is often genuinely nonconvex and nonsmooth. This can be seen in a wide array of problems such as: compressed sensing, matrix factorization, dictionary learning, sparse approximations of signals and images, and blind decomposition, to mention just a few. We thus consider a broad class of nonconvex-nonsmooth problems of the form
where the functions \(f\) and \(g\) are extended valued (i.e., allowing the inclusion of constraints) and \(H\) is a smooth function (see more precise definitions in the next section). We stress that throughout this paper, no convexity whatsoever is assumed in the objective and/or the constraints. Moreover, we note that the choice of two blocks of variables is for the sake of simplicity of exposition. Indeed, all the results derived in this paper hold true for a finite number of blocks of variables, see Sect. 3.6.
This model is rich enough to cover many of the applications mentioned above, and was recently studied in the work of Attouch et al. [2], which also provides the motivation for the present work. The standard approach to solve Problem \((M)\) is via the so-called Gauss–Seidel iteration scheme, popularized in the modern era under the name alternating minimization. That is, starting with some given initial point \(\left( x^{0} , y^{0}\right) \), we generate a sequence \(\left\{ \left( x^{k} , y^{k}\right) \right\} _{k \in \mathbb N }\) via the scheme
$$\begin{aligned} x^{k+1} \in \mathop {\mathrm{argmin}}_{x \in \mathbb R ^{n}} \varPsi \left( x , y^{k}\right) , \qquad y^{k+1} \in \mathop {\mathrm{argmin}}_{y \in \mathbb R ^{m}} \varPsi \left( x^{k+1} , y\right) . \end{aligned}$$
Convergence results for the Gauss–Seidel method, also known as the coordinate descent method, can be found in several studies, see e.g., [4, 12, 28, 33]. One of the key assumptions necessary to prove convergence is that the minimum in each step is uniquely attained, see e.g., [34]. Otherwise, as shown by Powell [30], the method may cycle indefinitely without converging. In the convex setting, for a continuously differentiable function \(\varPsi \), assuming strict convexity in one argument while the other is fixed, every limit point of the sequence \(\left\{ \left( x^{k} , y^{k}\right) \right\} _{k \in \mathbb N }\) generated by this method minimizes \(\varPsi \), see e.g., [12]. Very recently, in [10], global rate of convergence results were derived for a block coordinate gradient projection algorithm for convex and smooth constrained minimization problems.
Removing the strict convexity assumption can be achieved by coupling the method with proximal terms, that is, by considering the proximal regularization of the Gauss–Seidel scheme:
$$\begin{aligned} x^{k+1} \in \mathop {\mathrm{argmin}}_{x \in \mathbb R ^{n}} \left\{ \varPsi \left( x , y^{k}\right) + \frac{c_{k}}{2}\left\| {x - x^{k}} \right\| ^{2} \right\} , \end{aligned}$$(1.1)
$$\begin{aligned} y^{k+1} \in \mathop {\mathrm{argmin}}_{y \in \mathbb R ^{m}} \left\{ \varPsi \left( x^{k+1} , y\right) + \frac{d_{k}}{2}\left\| {y - y^{k}} \right\| ^{2} \right\} , \end{aligned}$$(1.2)
where \(c_{k}\) and \(d_{k}\) are positive real numbers. In fact, such an idea was already suggested by Auslender in [6]. It was further studied in [7] with a nonquadratic proximal term to handle linearly constrained convex problems, and further results can be found in [19]. In all these works, only convergence of subsequences can be established. In the nonconvex and nonsmooth setting, which is the focus of this paper, the situation becomes much harder, see e.g., [33].
The present work is motivated by two very recent papers by Attouch et al. [2, 3], which appear to be the first works in the general nonconvex and nonsmooth setting, establishing in [2] convergence of the sequences generated by the proximal Gauss–Seidel scheme (see (1.1) and (1.2)), while in [3] a similar result was proven for the well-known proximal forward–backward (PFB) algorithm applied to the nonconvex and nonsmooth minimization of the sum of a nonsmooth function and a smooth one (i.e., Problem \((M)\) with no \(y\)). Their approach relies on the assumption that the objective function \(\varPsi \) to be minimized satisfies the so-called Kurdyka–Łojasiewicz (KL) property [22, 25], which was developed for nonsmooth functions by Bolte et al. [16, 17] (see Sect. 2.4).
In both of these works, the suggested approach gains its strength from the fact that the class of functions satisfying the KL property is considerably large and covers a wealth of nonconvex-nonsmooth functions arising in many fundamental applications; see more in the forthcoming Sect. 3 and in the Appendix.
Clearly, the scheme (1.1) and (1.2) always produces a nonincreasing sequence of function values, i.e., for all \(k \ge 0\) we have
$$\begin{aligned} \varPsi \left( x^{k+1} , y^{k+1}\right) \le \varPsi \left( x^{k} , y^{k}\right) , \end{aligned}$$
and the sequence \(\{ \varPsi (x^{k} , y^{k}) \}_{k \in \mathbb N }\) is bounded from below by \(\inf \varPsi \). Thus, with \(\inf \varPsi > -\infty \), the sequence \(\{ \varPsi (x^{k} , y^{k}) \}_{k \in \mathbb N }\) converges to some real number, and as proven in [2], assuming that the objective function \(\varPsi \) satisfies the KL property, every bounded sequence generated by the proximal regularized Gauss–Seidel scheme (1.1) and (1.2) converges to a critical point of \(\varPsi \). These are nice properties for the scheme alluded to above. However, this scheme is conceptual, and not really a “true” algorithm, in the sense that it suffers from (at least) two main drawbacks. First, each step requires the exact minimization of a nonconvex and nonsmooth problem. Secondly, it is a nested scheme, which raises two nontrivial issues: (i) the accumulation of computational errors at each step, and (ii) how and when to stop each inner step before passing to the next.
The above drawbacks motivate a very simple and naive approach, which can be traced back to Auslender [5] for smooth unconstrained minimization. Thus, for the more general Problem \((M)\), for each block of coordinates we perform one gradient step on the smooth part, while a proximal step is taken on the nonsmooth part. This idea contrasts with the entirely implicit step required by the proximal version of the Gauss–Seidel method (1.1) and (1.2); that is, here we consider an approximation of this scheme via the well-known and standard proximal linearization of each subproblem. This yields the Proximal Alternating Linearized Minimization (PALM) algorithm, whose exact description is given in Sect. 3.1. Thus, the root of our method can be viewed as nothing but an alternating minimization approach applied to the so-called Proximal Forward–Backward (PFB) algorithm. Let us mention that the PFB algorithm has been extensively studied and successfully applied in many contexts in the convex setting, see e.g., the recent monograph of Bauschke and Combettes [8] for a wealth of fundamental results and references therein.
Now, we briefly outline the novelty of our approach and our contributions. First, the coupling of the proximal Gauss–Seidel scheme with PFB does not seem to have been analyzed in the literature within the general nonconvex and nonsmooth setting proposed here. It allows us to eliminate the difficulties evoked above with the scheme (1.1) and (1.2), and it leads to a simple and tractable algorithm, PALM, with global convergence results for nonconvex and nonsmooth semialgebraic problems.
Secondly, while part of the convergence result we develop in this article falls within the scope of a general convergence mechanism introduced and described in [3], we present here a self-contained, thorough proof that avoids the use of these abstract results. The motivation stems from the fact that we target applications for which the KL property holds at each point of the underlying space. Functions having this property are called KL functions. A very wide class of KL functions is provided by tame functions; these include in particular nonsmooth semialgebraic and real subanalytic functions (see, e.g., [2] and references therein). This property allows us, through a “uniformization” result inspired by Attouch and Bolte [1] (see Lemma 6), to considerably simplify the main arguments of the convergence analysis and to avoid involved induction reasoning.
A third consequence of our approach is to provide a step-by-step analysis of our algorithm which singles out, at each stage of the convergence proof, the essential tools that are needed to get to the next stage. This allows one to understand the main ingredients at play and to evidence the exact role of the KL property in the analysis of algorithms in the nonconvex and nonsmooth setting, see more details in Sect. 3.2, where we outline a sort of “recipe” for proving global convergence results that could be of benefit in analyzing many other optimization algorithms.
A fourth implication is that our block coordinate approach allows us to get rid of a restrictive assumption inherent to the proximal forward–backward algorithm which is often overlooked: the gradient of the smooth part \(H\) has to be globally Lipschitz continuous. This requirement often reduces the potential of applying PFB in concrete applications. On the contrary, our approach provides a flexibility that allows us to deal with more general problems (e.g., componentwise quadratic forms) or with some ill-conditioned quadratic problems. Indeed, the stepsizes in PALM may be adjusted componentwise in order to fit as much as possible the structure of the problem at hand, see Sect. 4 for an interesting application. Another byproduct of this work is that it can also be applied to the convex version of Problem \((M)\), for which convergence results are quite limited. Indeed, even for convex problems our convergence results are new (see the Appendix). Finally, to illustrate our results, we present a simple algorithm proven to converge to a critical point for a broad class of nonconvex and nonsmooth nonnegative matrix factorization problems, which, to the best of our knowledge, appears to be the first globally convergent algorithm for this important class of problems.
Outline of the paper. The paper is organized as follows. In the next section we define the problem, make precise our setting, collect a few preliminary basic facts on nonsmooth analysis and on proximal maps for nonconvex functions, and introduce the KL property. In Sect. 3 we state the algorithm PALM, derive some elementary properties, and then develop a systematic approach to establish our main convergence results (see Sect. 3.2). In particular, we clearly specify when and where the KL property plays a fundamental role in the overall convergence analysis. Section 4 illustrates our results on a broad class of nonconvex and nonsmooth matrix factorization problems. Finally, to make this paper self-contained, we include an appendix which summarizes some well-known and relevant results on the KL property, including some useful examples of KL functions. Throughout the paper, our notation is quite standard and can be found, for example, in [31].
2 The problem and some preliminaries
2.1 The problem and basic assumptions
We are interested in solving the nonconvex and nonsmooth minimization problem
$$\begin{aligned} (M) \qquad \min _{x \in \mathbb R ^{n} , \, y \in \mathbb R ^{m}} \varPsi \left( x , y\right) := f\left( x\right) + g\left( y\right) + H\left( x , y\right) . \end{aligned}$$
Following [2], we take the following as our blanket assumption.
Assumption 1

(i)
\(f : \mathbb R ^{n} \rightarrow \left( -\infty , +\infty \right] \) and \(g : \mathbb R ^{m} \rightarrow \left( -\infty , +\infty \right] \) are proper and lower semicontinuous functions.

(ii)
\(H : \mathbb R ^{n} \times \mathbb R ^{m} \rightarrow \mathbb R \) is a \(C^{1}\) function.
2.2 Subdifferentials of nonconvex and nonsmooth functions
Let us recall a few definitions concerning subdifferential calculus (see, for instance, [27, 31]). Recall that for \(\sigma : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \) a proper and lower semicontinuous function, the domain of \(\sigma \) is defined through
$$\begin{aligned} {\hbox {dom}}\,{\sigma } := \left\{ x \in \mathbb R ^{d} : \sigma \left( x\right) < +\infty \right\} . \end{aligned}$$
Definition 1
(Subdifferentials) Let \(\sigma : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \) be a proper and lower semicontinuous function.

(i)
For a given \(x \in {\hbox {dom}}\,{\sigma }\), the Fréchet subdifferential of \(\sigma \) at \(x\), written \(\widehat{\partial } \sigma (x)\), is the set of all vectors \(u \in \mathbb R ^{d}\) which satisfy
$$\begin{aligned} \liminf _{\substack{y \rightarrow x \\ y \ne x}} \frac{\sigma (y) - \sigma (x) - \left\langle {u , y - x} \right\rangle }{\left\| {y - x} \right\| } \ge 0. \end{aligned}$$When \(x \notin {\hbox {dom}}\,{\sigma }\), we set \(\widehat{\partial } \sigma \left( x\right) = \emptyset \).

(ii)
The limitingsubdifferential [27], or simply the subdifferential, of \(\sigma \) at \({x \in \mathbb R ^n}\), written \(\partial \sigma \left( x\right) \), is defined through the following closure process
$$\begin{aligned} \partial \sigma \left( x\right) := \left\{ u \in \mathbb R ^{d} : \exists \, x^{k} \rightarrow x,\ \sigma (x^{k}) \rightarrow \sigma (x) \; \text{ and } \; u^{k} \in \widehat{\partial } \sigma (x^{k}) \rightarrow u \; \text {as} \; k \rightarrow \infty \right\} . \end{aligned}$$
Remark 1
(i) We have \(\widehat{\partial } \sigma \left( x\right) \subset \partial \sigma (x)\) for each \(x \in \mathbb R ^{d}\). In the previous inclusion, the first set is closed and convex while the second one is closed (see [31, Theorem 8.6, page 302]).

(ii)
Let \(\{ (x^{k} , u^{k}) \}_{k \in \mathbb N }\) be a sequence in \(\hbox {graph}\,{\left( \partial \sigma \right) }\) that converges to \(\left( x , u\right) \) as \(k \rightarrow \infty \). By the very definition of \(\partial \sigma \left( x\right) \), if \(\sigma (x^{k})\) converges to \(\sigma (x)\) as \(k \rightarrow \infty \), then \(\left( x , u\right) \in \hbox {graph}\,{(\partial \sigma )}\).

(iii)
In this nonsmooth context, the well-known Fermat’s rule remains essentially unchanged. It formulates as: “if \(x \in \mathbb R ^{d}\) is a local minimizer of \(\sigma \), then \(0 \in \partial \sigma \left( x\right) \)”.

(iv)
Points whose subdifferential contains \(0\) are called (limiting) critical points.

(v)
The set of critical points of \(\sigma \) is denoted by \(\hbox {crit}\,{\sigma }\).
Definition 2
(Sublevel sets) Given real numbers \(\alpha \) and \(\beta \), we set
$$\begin{aligned} \left[ \alpha \le \sigma \le \beta \right] := \left\{ x \in \mathbb R ^{d} : \alpha \le \sigma \left( x\right) \le \beta \right\} . \end{aligned}$$
We define similarly \(\left[ \alpha < \sigma < \beta \right] \). The level sets of \(\sigma \) are simply denoted by
$$\begin{aligned} \left[ \sigma = \alpha \right] := \left\{ x \in \mathbb R ^{d} : \sigma \left( x\right) = \alpha \right\} . \end{aligned}$$
Let us recall a useful result related to our structured Problem \((M)\), see e.g., [31].
Proposition 1
(Subdifferentiability property) Assume that the coupling function \(H\) in Problem \((M)\) is continuously differentiable. Then for all \(\left( x , y\right) \in \mathbb R ^{n} \times \mathbb R ^{m}\) we have
$$\begin{aligned} \partial \varPsi \left( x , y\right) = \left( \nabla _{x} H\left( x , y\right) + \partial f\left( x\right) \right) \times \left( \nabla _{y} H\left( x , y\right) + \partial g\left( y\right) \right) . \end{aligned}$$
Remark 2
Recall that for any set \(S\), both \(S + \emptyset \) and \(S \times \emptyset \) are empty sets, so that the above formula makes sense over the whole product space \(\mathbb R ^{n} \times \mathbb R ^{m}\).
2.3 Proximal map for nonconvex functions
We need to recall the fundamental Moreau proximal map for a nonconvex function (see [31, page 20]). It is at the heart of the PALM algorithm.
Let \(\sigma : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \) be a proper and lower semicontinuous function. Given \(x \in \mathbb R ^{d}\) and \(t > 0\), the proximal map associated to \(\sigma \) and its corresponding Moreau proximal envelope are defined respectively by:
$$\begin{aligned} \hbox {prox}_{t}^{\sigma }\left( x\right) := \mathop {\mathrm{argmin}}_{u \in \mathbb R ^{d}} \left\{ \sigma \left( u\right) + \frac{t}{2}\left\| {u - x} \right\| ^{2} \right\} , \end{aligned}$$(2.2)
and
$$\begin{aligned} m^{\sigma }\left( x , t\right) := \inf _{u \in \mathbb R ^{d}} \left\{ \sigma \left( u\right) + \frac{t}{2}\left\| {u - x} \right\| ^{2} \right\} . \end{aligned}$$
Proposition 2
(Well-definedness of proximal maps) Let \(\sigma : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \) be a proper and lower semicontinuous function with \(\inf _\mathbb{R ^{d}} \sigma > -\infty \). Then, for every \(t \in \left( 0 , \infty \right) \) the set \(\hbox {prox}_{t}^{\sigma }\left( x\right) \) is nonempty and compact; in addition, \(m^{\sigma }\left( x , t\right) \) is finite and continuous in \(\left( x , t\right) \).
Note that here \(\hbox {prox}_{t}^{\sigma }\) is a set-valued map. When \(\sigma := \delta _{X}\), the indicator function of a nonempty and closed set \(X\), i.e., the function \(\delta _{X} : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \) defined, for all \(x \in \mathbb R ^{d}\), by
$$\begin{aligned} \delta _{X}\left( x\right) = {\left\{ \begin{array}{ll} 0 &{} \text {if } x \in X, \\ +\infty &{} \text {otherwise,} \end{array}\right. } \end{aligned}$$
the proximal map reduces to the projection operator onto \(X\), defined by
$$\begin{aligned} P_{X}\left( x\right) := \mathop {\mathrm{argmin}} \left\{ \left\| {u - x} \right\| : u \in X \right\} . \end{aligned}$$
The projection \(P_{X} : \mathbb R ^{d} \rightrightarrows \mathbb R ^{d}\) has nonempty values and is in general multivalued, as opposed to the convex case where orthogonal projections are guaranteed to be single-valued.
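To make this set-valuedness concrete, here is a small numerical sketch (ours, not part of the paper), assuming the convention \(\hbox {prox}_{t}^{\sigma }(x) = \mathop {\mathrm{argmin}}_u \{\sigma (u) + \frac{t}{2}\Vert u - x\Vert ^2\}\): the proximal map of the nonconvex function \(\sigma (u) = \lambda \Vert u\Vert _0\) is hard thresholding, and the projection onto the unit sphere is multivalued at the origin; both functions below return one selection from the corresponding set.

```python
import numpy as np

def prox_l0(x, lam, t):
    """One selection from prox_t^sigma(x) for sigma(u) = lam * ||u||_0,
    with prox_t^sigma(x) = argmin_u { sigma(u) + t/2 ||u - x||^2 }.
    This is hard thresholding: keep x_i only when lam < (t/2) * x_i^2."""
    u = x.copy()
    u[x ** 2 < 2.0 * lam / t] = 0.0
    return u

def proj_sphere(x):
    """One selection from the projection onto the (nonconvex) unit sphere;
    at x = 0 every unit vector is a nearest point, so the map is multivalued."""
    n = np.linalg.norm(x)
    if n > 0:
        return x / n
    e = np.zeros_like(x)
    e[0] = 1.0   # one arbitrary selection at the origin
    return e

print(prox_l0(np.array([0.3, -2.0, 0.05]), lam=0.5, t=1.0))  # small entries zeroed
print(proj_sphere(np.array([3.0, 4.0])))
```

On the tie \(|x_i| = \sqrt{2\lambda /t}\) both \(0\) and \(x_i\) are minimizers, which is exactly where \(\hbox {prox}_{t}^{\sigma }\) fails to be single-valued.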
2.4 The Kurdyka–Łojasiewicz property
The Kurdyka–Łojasiewicz property plays a central role in our analysis. Below, we recall the essential elements. We begin with the following extension of the Łojasiewicz gradient inequality [25], as introduced in [2] for nonsmooth functions. First, we introduce some notation. For any subset \(S \subset \mathbb R ^{d}\) and any point \(x \in \mathbb R ^{d}\), the distance from \(x\) to \(S\) is defined and denoted by
$$\begin{aligned} \hbox {dist}\left( x , S\right) := \inf \left\{ \left\| {y - x} \right\| : y \in S \right\} . \end{aligned}$$
When \(S = \emptyset \), we have \(\hbox {dist}\left( x , S\right) = +\infty \) for all \(x\).
Let \(\eta \in \left( 0 , +\infty \right] \). We denote by \(\varPhi _{\eta }\) the class of all concave and continuous functions \(\varphi : \left[ 0 , \eta \right) \rightarrow \mathbb R _{+}\) which satisfy the following conditions:

(i)
\(\varphi \left( 0\right) = 0\);

(ii)
\(\varphi \) is \(C^{1}\) on \(\left( 0 , \eta \right) \) and continuous at \(0\);

(iii)
for all \(s \in \left( 0 , \eta \right) \): \(\varphi ^{\prime }\left( s\right) > 0\).
Now we define the Kurdyka–Łojasiewicz (KL) property.
Definition 3
(Kurdyka–Łojasiewicz property) Let \(\sigma : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \) be proper and lower semicontinuous.

(i)
The function \(\sigma \) is said to have the Kurdyka–Łojasiewicz (KL) property at \(\overline{u} \in {\hbox {dom}}\,{\partial \sigma } := \left\{ u \in \mathbb R ^{d} : \partial \sigma \left( u\right) \ne \emptyset \right\} \) if there exist \(\eta \in \left( 0 , +\infty \right] \), a neighborhood \(U\) of \(\overline{u}\) and a function \(\varphi \in \varPhi _{\eta }\), such that for all
$$\begin{aligned} u \in U \cap \left[ \sigma \left( \overline{u}\right) < \sigma \left( u\right) < \sigma \left( \overline{u}\right) + \eta \right] , \end{aligned}$$the following inequality holds
$$\begin{aligned} \varphi ^{\prime }\left( \sigma \left( u\right)  \sigma \left( \overline{u}\right) \right) \hbox {dist}\left( 0 , \partial \sigma \left( u\right) \right) \ge 1. \end{aligned}$$(2.4) 
(ii)
If \(\sigma \) satisfies the KL property at each point of \({\hbox {dom}}\,{\partial \sigma }\), then \(\sigma \) is called a KL function.
It is easy to establish that the KL property holds in a neighborhood of noncritical points (see, e.g., [2]); thus the truly relevant aspect of this property is when \(\bar{u}\) is critical, i.e., when \(0 \in \partial \sigma \left( \bar{u}\right) \). In that case it warrants that \(\sigma \) is sharp up to a reparameterization of its values: “\(\sigma \) is amenable to sharpness”. Indeed, inequality (2.4) can be proved to imply
$$\begin{aligned} \hbox {dist}\left( 0 , \partial \left( \varphi \circ \left( \sigma \left( \cdot \right) - \sigma \left( \bar{u}\right) \right) \right) \left( u\right) \right) \ge 1 \end{aligned}$$
for all convenient \(u\) (simply use the “one-sided” chain rule [31, Theorem 10.6]). This means that the subgradients of the function \(u \rightarrow \varphi \left( \sigma \left( u\right) - \sigma \left( \bar{u}\right) \right) \) have norm at least \(1\), no matter how close the point \(u\) is to the critical point \(\bar{u}\) (provided that \(\sigma \left( u\right) > \sigma \left( \bar{u}\right) \)). This property is called sharpness, while the reparameterization function \(\varphi \) is called a desingularizing function of \(\sigma \) at \(\overline{u}\). As described in further detail below, this geometrical feature has dramatic consequences in the study of first-order descent methods (see also [3]).
A remarkable aspect of KL functions is that they are ubiquitous in applications: for example, semialgebraic, subanalytic and log-exp functions are KL functions (see [1–3] and references therein). These facts originate in the pioneering and fundamental works of Łojasiewicz [25] and Kurdyka [22], which were recently extended to nonsmooth functions in [16, 17]. In the Appendix we recall a nonsmooth semialgebraic version of the KL property, Theorem 3, which covers many problems arising in optimization and which plays a central role in the convergence analysis of our algorithm for the nonnegative matrix factorization problem. For the reader’s convenience, other related facts and pertinent results are also summarized in the same appendix.
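As a toy illustration (ours, not the paper's), the KL inequality (2.4) can be checked numerically for the smooth function \(\sigma (u) = u^{2}\) at its critical point \(\bar{u} = 0\), with desingularizing function \(\varphi (s) = \sqrt{s}\): the product \(\varphi ^{\prime }(\sigma (u))\,\hbox {dist}(0 , \partial \sigma (u))\) equals exactly \(1\) for every \(u \ne 0\).

```python
import math

# sigma(u) = u^2, ubar = 0 (so sigma(ubar) = 0), phi(s) = sqrt(s),
# hence phi'(s) = 1 / (2 sqrt(s)) and dist(0, dsigma(u)) = |2u|.  Then
#   phi'(sigma(u) - sigma(ubar)) * dist(0, dsigma(u))
#     = (1 / (2|u|)) * |2u| = 1 >= 1   for every u != 0.
def kl_product(u):
    s = u * u  # sigma(u) - sigma(ubar)
    return (1.0 / (2.0 * math.sqrt(s))) * abs(2.0 * u)

for u in [1e-1, 1e-3, 1e-6, -2.0]:
    assert kl_product(u) >= 1.0 - 1e-12  # inequality (2.4) holds up to ubar
```

The square-root reparameterization compensates exactly for the vanishing gradient near \(\bar{u}\), which is the "amenability to sharpness" described above.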
3 PALM algorithm and convergence analysis
3.1 The algorithm PALM
As outlined in the Introduction, PALM can be viewed as alternating the steps of the PFB scheme. It is well-known that the proximal forward–backward scheme for minimizing the sum of a smooth function \(h\) with a nonsmooth one \(\sigma \) can simply be viewed as the proximal regularization of \(h\) linearized at a given point \(x^{k}\), i.e.,
$$\begin{aligned} x^{k+1} \in \mathop {\mathrm{argmin}}_{x \in \mathbb R ^{d}} \left\{ \left\langle {x - x^{k} , \nabla h\left( x^{k}\right) } \right\rangle + \frac{t}{2}\left\| {x - x^{k}} \right\| ^{2} + \sigma \left( x\right) \right\} , \end{aligned}$$
that is, using the proximal map notation defined in (2.2), we get
$$\begin{aligned} x^{k+1} \in \hbox {prox}_{t}^{\sigma }\left( x^{k} - \frac{1}{t}\nabla h\left( x^{k}\right) \right) . \end{aligned}$$
Adopting this scheme for Problem \((M)\), we thus replace \(\varPsi \) in the iterations (1.1) and (1.2) (cf. the Introduction) by approximations obtained through the proximal linearization of each subproblem, i.e., in the \(x\)-step \(\varPsi \) is replaced by
$$\begin{aligned} x \mapsto f\left( x\right) + \left\langle {x - x^{k} , \nabla _{x} H\left( x^{k} , y^{k}\right) } \right\rangle + \frac{c_{k}}{2}\left\| {x - x^{k}} \right\| ^{2} , \end{aligned}$$
and in the \(y\)-step by
$$\begin{aligned} y \mapsto g\left( y\right) + \left\langle {y - y^{k} , \nabla _{y} H\left( x^{k+1} , y^{k}\right) } \right\rangle + \frac{d_{k}}{2}\left\| {y - y^{k}} \right\| ^{2} . \end{aligned}$$
Thus, alternating minimization over the two blocks \(\left( x , y\right) \) yields the basis of the algorithm PALM we propose here.
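The alternation just described can be sketched on a toy instance (our illustration, not the paper's algorithm box): minimize \(\frac{1}{2}(a - xy)^{2} + \delta _{x \ge 0}(x) + \delta _{y \ge 0}(y)\), where each proximal step is the projection \(\max (\cdot , 0)\); the factor \(\gamma > 1\) and the floor \(\mu \) on the moduli (cf. Remark 3(iii)) are tuning assumptions of this sketch.

```python
# A minimal PALM sketch on min 1/2 (a - x*y)^2 + delta_{x>=0} + delta_{y>=0}.
# f and g are indicators of [0, inf), so the prox of each block is max(., 0).
def palm_toy(a=6.0, x=1.0, y=1.0, gamma=1.5, mu=1e-2, iters=200):
    for _ in range(iters):
        # x-step: c_k = gamma * L_1(y^k), with L_1(y) = y^2 since
        # grad_x H(x, y) = -y * (a - x*y) is linear in x.
        c = gamma * max(y * y, mu)
        x = max(0.0, x - (-y * (a - x * y)) / c)
        # y-step: d_k = gamma * L_2(x^{k+1}), with L_2(x) = x^2.
        d = gamma * max(x * x, mu)
        y = max(0.0, y - (-x * (a - x * y)) / d)
    return x, y

x, y = palm_toy()
print(x * y)  # close to a = 6: the iterates approach a critical point
```

Note how the stepsizes are recomputed blockwise at every iteration from the current moduli \(L_{1}(y^{k})\) and \(L_{2}(x^{k+1})\), precisely the flexibility discussed in the Introduction.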
PALM needs minimal assumptions to be analyzed.
Assumption 2

(i)
\(\inf _\mathbb{R ^{n} \times \mathbb R ^{m}} \varPsi > -\infty ,\,\inf _\mathbb{R ^{n}} f > -\infty \) and \(\inf _\mathbb{R ^{m}} g > -\infty \).

(ii)
For any fixed \(y\), the function \(x \rightarrow H\left( x , y\right) \) is \(C^{1,1}_{L_{1}(y)}\), namely the partial gradient \(\nabla _{x} H\left( x , y\right) \) is globally Lipschitz with modulus \(L_{1}\left( y\right) \), that is
$$\begin{aligned} \left\| {\nabla _{x} H\left( x_{1} , y\right) - \nabla _{x} H\left( x_{2} , y\right) } \right\| \le L_{1}\left( y\right) \left\| {x_{1} - x_{2}} \right\| , \quad \forall \,x_{1} , x_{2} \in \mathbb R ^{n}. \end{aligned}$$Likewise, for any fixed \(x\) the function \(y \rightarrow H\left( x , y\right) \) is assumed to be \(C^{1,1}_{L_{2}(x)}\).

(iii)
For \(i = 1 , 2\) there exist \(\lambda _{i}^{-} , \lambda _{i}^{+} > 0\) such that
$$\begin{aligned} \inf \{ L_{1}(y^{k}) : k \in \mathbb N \}&\ge \lambda _{1}^{-} \quad \text {and} \quad \inf \{ L_{2}(x^{k}) : k \in \mathbb N \} \ge \lambda _{2}^{-} \end{aligned}$$(3.5)$$\begin{aligned} \sup \{ L_{1}(y^{k}) : k \in \mathbb N \}&\le \lambda _{1}^{+} \quad \text {and} \quad \sup \{ L_{2}(x^{k}) : k \in \mathbb N \} \le \lambda _{2}^{+}. \end{aligned}$$(3.6)
(iv)
\(\nabla H\) is Lipschitz continuous on bounded subsets of \(\mathbb R ^{n} \times \mathbb R ^{m}\). In other words, for every bounded subset \(B_{1} \times B_{2}\) of \(\mathbb R ^{n} \times \mathbb R ^{m}\) there exists \(M > 0\) such that for all \(\left( x_{i} , y_{i}\right) \in B_{1} \times B_{2},\,i = 1 , 2\):
$$\begin{aligned}&\left\| {\left( \nabla _{x} H\left( x_{1} , y_{1}\right) - \nabla _{x} H\left( x_{2} , y_{2}\right) , \nabla _{y} H\left( x_{1} , y_{1}\right) - \nabla _{y} H\left( x_{2} , y_{2}\right) \right) } \right\| \nonumber \\&\quad \le M \left\| {\left( x_{1} - x_{2} , y_{1} - y_{2}\right) } \right\| . \end{aligned}$$(3.7)
A few words on Assumption 2 are now in order.
Remark 3

(i)
Assumption 2(i) ensures that Problem \((M)\) is bounded from below. It also warrants that the algorithm PALM is well defined through the proximal map formulas (3.3) and (3.4) (see Proposition 2).

(ii)
The partial Lipschitz properties required in Assumption 2(ii) are at the heart of PALM, which is designed to fully exploit the block-Lipschitz property of the problem at hand.

(iii)
The inequalities (3.5) in Assumption 2(iii) guarantee that the proximal steps in PALM always remain well defined. As we describe below, these two properties are not demanding at all. Indeed, consider a function \(H\) whose gradient is blockwise Lipschitz continuous as in Assumption 2(ii). Take two arbitrary positive constants \(\mu _{1}^{-}\) and \(\mu _{2}^{-}\), and replace the Lipschitz moduli \(L_{1}(y)\) and \(L_{2}(x)\) by \(L_{1}^{\prime }(y) = \max \left\{ L_{1}(y) , \mu _{1}^{-} \right\} \) and \(L_{2}^{\prime }(x) = \max \left\{ L_{2}(x) , \mu _{2}^{-} \right\} \), respectively. The functions \(L_{1}^{\prime }(y)\) and \(L_{2}^{\prime }(x)\) are still Lipschitz moduli of \(\nabla _{x} H\left( \cdot , y\right) \) and \(\nabla _{y} H\left( x , \cdot \right) \), respectively. Moreover
$$\begin{aligned} \inf \left\{ L_{1}^{\prime }(y) : y \in \mathbb R ^{m} \right\} \ge \mu _{1}^{-} \quad \text {and} \quad \inf \left\{ L_{2}^{\prime }(x) : x \in \mathbb R ^{n} \right\} \ge \mu _{2}^{-}. \end{aligned}$$Thus the inequalities (3.5) are trivially fulfilled with these new Lipschitz moduli and with \(\lambda _{i}^{-} = \mu _{i}^{-}\) (\(i = 1 , 2\)).

(iv)
Assumption 2(iv) is satisfied whenever \(H\) is \(C^{2}\), as a direct consequence of the Mean Value Theorem. Similarly, the inequalities (3.6) in Assumption 2(iii) can be obtained by assuming that \(H\) is \(C^{2}\) and that the generated sequence \(\{ (x^{k} , y^{k}) \}_{k \in \mathbb N }\) is bounded.
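For instance (our sketch, anticipating the matrix factorization setting of Sect. 4), for the coupling \(H(X , Y) = \frac{1}{2}\Vert A - XY\Vert _{F}^{2}\) the partial gradients are linear in each block, so the blockwise moduli can be taken as \(L_{1}(Y) = \Vert Y Y^{T}\Vert \) and \(L_{2}(X) = \Vert X^{T} X\Vert \) (spectral norms); the snippet below verifies the Lipschitz inequality of Assumption 2(ii) numerically on random data.

```python
import numpy as np

# H(X, Y) = 1/2 ||A - X Y||_F^2, so grad_X H(X, Y) = (X Y - A) Y^T is
# linear in X with Lipschitz modulus L_1(Y) = ||Y Y^T||_2; by symmetry,
# L_2(X) = ||X^T X||_2 works for the y-block.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))
X = rng.standard_normal((8, 3))
Y = rng.standard_normal((3, 6))

L1 = np.linalg.norm(Y @ Y.T, 2)   # spectral norm: modulus of X -> grad_X H(X, Y)
L2 = np.linalg.norm(X.T @ X, 2)

# Check ||grad_X H(X1, Y) - grad_X H(X2, Y)|| <= L1 ||X1 - X2|| (Frobenius norms),
# which follows from (X1 - X2) Y Y^T and ||B C||_F <= ||B||_F ||C||_2.
X1 = rng.standard_normal((8, 3))
X2 = rng.standard_normal((8, 3))
lhs = np.linalg.norm((X1 @ Y - A) @ Y.T - (X2 @ Y - A) @ Y.T)
assert lhs <= L1 * np.linalg.norm(X1 - X2) + 1e-9
```

Note that neither modulus is a global Lipschitz constant for the full gradient \(\nabla H\); it is exactly this blockwise structure that PALM exploits.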
Before deriving the convergence results for PALM, in the next subsection we outline our proof methodology.
3.2 An informal general proof recipe
Fix a positive integer \(N\). Let \(\varPsi : \mathbb R ^{N} \rightarrow \left( -\infty , +\infty \right] \) be a proper and lower semicontinuous function which is bounded from below and consider the problem
$$\begin{aligned} \min \left\{ \varPsi \left( z\right) : z \in \mathbb R ^{N} \right\} . \end{aligned}$$
Suppose we are given a generic algorithm \(\mathcal A \) which generates a sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) via the following:
The objective is to prove that the whole sequence generated by the algorithm \(\mathcal{A }\) converges to a critical point of \(\varPsi \).
In the light of [1, 3], we outline a general methodology which describes the main steps to achieve this goal. In particular, we put in evidence how and when the KL property enters into action. Basically, the methodology consists of three main steps.

(i)
Sufficient decrease property: Find a positive constant \(\rho _{1}\) such that
$$\begin{aligned} \rho _{1}\left\| {z^{k + 1} - z^{k}} \right\| ^{2} \le \varPsi (z^{k}) - \varPsi (z^{k + 1}), \quad \forall \,k = 0 , 1 , \ldots . \end{aligned}$$
(ii)
A subgradient lower bound for the iterates gap: Assume that the sequence generated by the algorithm \(\mathcal{A }\) is bounded. Find another positive constant \(\rho _{2}\) such that
$$\begin{aligned} \left\| {w^{k+1}} \right\| \le \rho _{2}\left\| {z^{k+1} - z^{k }} \right\| , \quad w^{k} \in \partial \varPsi \left( z^{k}\right) , \quad \forall \,k = 0 , 1 , \ldots . \end{aligned}$$
These first two requirements are quite standard and shared by essentially all descent algorithms, see e.g., [2]. Note that when properties (i) and (ii) hold, then for any algorithm \(\mathcal A \) one can show that the set of accumulation points is a nonempty, compact and connected set (see Lemma 5 (iii) for the case of PALM). One then needs to prove that it is a subset of the set of critical points of \(\varPsi \), on which \(\varPsi \) is constant.
Apart from the aspects concerning the structure of the limiting set (nonempty, compact and connected), these first two steps depend on the structure of the specific chosen algorithm \(\mathcal A \). Therefore the constants \(\rho _{1}\) and \(\rho _{2}\) are fit to the given algorithm. The third step, needed to complete our goal, namely to establish global convergence to a critical point of \(\varPsi \), does not depend at all on the structure of the specific chosen algorithm \(\mathcal A \).
Rather, it requires an additional assumption on the class of functions \(\varPsi \) to be minimized. It is here that the KL property enters into action: relying on the descent property of the algorithm and on a uniformization of the KL property (see Lemma 6 below), the third and last step amounts to:

(iii)
Using the KL property: Assume that \(\varPsi \) is a KL function and show that the generated sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) is a Cauchy sequence.
This basic approach can in principle be applied to any algorithm and is now systematically developed for PALM.
3.3 Basic convergence properties
We first establish some basic properties of PALM under Assumptions 1 and 2. We begin by recalling the well-known and important descent lemma for smooth functions, see e.g., [12, 28].
Lemma 1
(Descent lemma) Let \(h : \mathbb R ^{d} \rightarrow \mathbb R \) be a continuously differentiable function with gradient \(\nabla h\) assumed \(L_{h}\)-Lipschitz continuous. Then, for any \(u , v \in \mathbb R ^{d}\),
$$\begin{aligned} h\left( u\right) \le h\left( v\right) + \left\langle {u - v , \nabla h\left( v\right) } \right\rangle + \frac{L_{h}}{2}\left\| {u - v} \right\| ^{2}. \end{aligned}$$
The main computational step of PALM involves a proximal map step for a proper and lower semicontinuous but nonconvex function. The next result shows that the well-known key inequality for the proximal-gradient step in the convex setting (see, e.g., [9]) can be easily extended to the nonconvex setting to warrant a sufficient decrease of the objective function after a proximal map step.
Lemma 2
(Sufficient decrease property) Let \(h : \mathbb R ^{d} \rightarrow \mathbb R \) be a continuously differentiable function with gradient \(\nabla h\) assumed \(L_{h}\)-Lipschitz continuous and let \(\sigma : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \) be a proper and lower semicontinuous function with \(\inf _\mathbb{R ^{d}} \sigma > -\infty \). Fix any \(t > L_{h}\). Then, for any \(u \in {\hbox {dom}}\,{\sigma }\) and any \(u^{+} \in \mathbb R ^{d}\) defined by
$$\begin{aligned} u^{+} \in \hbox {prox}_{t}^{\sigma }\left( u - \frac{1}{t}\nabla h\left( u\right) \right) , \end{aligned}$$(3.9)
we have
$$\begin{aligned} h\left( u^{+}\right) + \sigma \left( u^{+}\right) \le h\left( u\right) + \sigma \left( u\right) - \frac{1}{2}\left( t - L_{h}\right) \left\| {u^{+} - u} \right\| ^{2}. \end{aligned}$$(3.10)
Proof
First, it follows immediately from Proposition 2 that \(u^{+}\) is well-defined. By the definition of the proximal map given in (2.2) we get
$$\begin{aligned} u^{+} \in \mathop {\mathrm{argmin}}_{v \in \mathbb R ^{d}} \left\{ \left\langle {v - u , \nabla h\left( u\right) } \right\rangle + \frac{t}{2}\left\| {v - u} \right\| ^{2} + \sigma \left( v\right) \right\} , \end{aligned}$$
and hence in particular, taking \(v = u\), we obtain
$$\begin{aligned} \left\langle {u^{+} - u , \nabla h\left( u\right) } \right\rangle + \frac{t}{2}\left\| {u^{+} - u} \right\| ^{2} + \sigma \left( u^{+}\right) \le \sigma \left( u\right) . \end{aligned}$$(3.11)
Invoking first the descent lemma (see Lemma 1) for \(h\), and then using inequality (3.11), we get
$$\begin{aligned} h\left( u^{+}\right) + \sigma \left( u^{+}\right) \le h\left( u\right) + \sigma \left( u\right) - \frac{1}{2}\left( t - L_{h}\right) \left\| {u^{+} - u} \right\| ^{2}. \end{aligned}$$
This proves that (3.10) holds. \(\square \)
Remark 4
(i) Inequality (3.10) is actually valid for any \(t > 0\); the condition \(t > L_{h}\) is what ensures a sufficient decrease in the value of \(h(u^{+}) + \sigma (u^{+})\).

(ii)
If the function \(\sigma \) is taken as the indicator function \(\delta _{X}\) of a nonempty and closed (not necessarily convex) subset \(X\), then the proximal map reduces to the projection operator \(P_{X}\) onto \(X\), that is,
$$\begin{aligned} u^{+} \in P_{X}\left( u - \frac{1}{t}\nabla h\left( u\right) \right) , \end{aligned}$$and we recover the sufficient decrease property of the Projected Gradient Method (PGM) in the nonconvex case.

(iii)
In the case when \(\sigma \) is a convex, proper and lower semicontinuous function, we can take \(t = L_{h}\) (and even \(t > \frac{L_h}{2}\)). Indeed, in that case, we can apply the global optimality condition characterizing \(u^{+}\) defined in (3.9) to get, instead of (3.11), the stronger inequality
$$\begin{aligned} \sigma \left( u^{+}\right) + \left\langle {u^{+} - u , \nabla h\left( u\right) } \right\rangle \le \sigma \left( u\right) - t\left\| {u^{+} - u} \right\| ^{2} \end{aligned}$$(3.12)which together with the descent lemma (see Lemma 1) yields
$$\begin{aligned} h\left( u^{+}\right) + \sigma \left( u^{+}\right) \le h\left( u\right) + \sigma \left( u\right) - \left( t - \frac{L_{h}}{2}\right) \left\| {u^{+} - u} \right\| ^{2}. \end{aligned}$$ 
(iv)
In view of item (iii), when applying PALM with convex functions \(f\) and \(g\), the constants \(c_{k}\) and \(d_{k},\,k \in \mathbb N \), can simply be taken as \(L_{1}(y^{k})\) and \(L_{2}(x^{k + 1})\), respectively.
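The sufficient decrease property of Lemma 2 is straightforward to check numerically. The sketch below is our own illustration (the choices \(h(u) = \frac{1}{2}\left\| Bu - b \right\| ^{2}\), \(\sigma = \left\| \cdot \right\| _{1}\) and the random data are assumptions, not taken from the paper); it forms one proximal map step and verifies inequality (3.10):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 5))
b = rng.standard_normal(8)

h = lambda u: 0.5 * np.sum((B @ u - b) ** 2)
grad_h = lambda u: B.T @ (B @ u - b)
sigma = lambda u: np.sum(np.abs(u))           # proper and lsc (here convex)
L_h = np.linalg.norm(B.T @ B, 2)              # Lipschitz modulus of grad h

t = 1.5 * L_h                                 # any t > L_h
u = rng.standard_normal(5)
v = u - grad_h(u) / t
# prox_t^sigma for the l1 norm is soft-thresholding at level 1/t
u_plus = np.sign(v) * np.maximum(np.abs(v) - 1.0 / t, 0.0)

lhs = h(u_plus) + sigma(u_plus)
rhs = h(u) + sigma(u) - 0.5 * (t - L_h) * np.sum((u_plus - u) ** 2)
assert lhs <= rhs + 1e-10                     # inequality (3.10)
```

Here \(\sigma \) happens to be convex, so by Remark 4(iii) an even stronger decrease actually holds; the check above only uses the general nonconvex estimate.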
Equipped with this result, we can now establish some useful properties of PALM under our Assumptions 1 and 2. In the sequel, for convenience, we often use the notation
$$\begin{aligned} z^{k} := \left( x^{k} , y^{k}\right) , \quad k \ge 0. \end{aligned}$$
Lemma 3
(Convergence properties) Suppose that Assumptions 1 and 2 hold. Let \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) be a sequence generated by PALM. The following assertions hold.

(i)
The sequence \(\left\{ \varPsi \left( z^{k}\right) \right\} _{k \in \mathbb N }\) is nonincreasing and in particular
$$\begin{aligned} \frac{\rho _{1}}{2}\left\| {z^{k + 1} - z^{k}} \right\| ^{2} \le \varPsi (z^{k}) - \varPsi (z^{k + 1}),\quad \forall \,k\ge 0, \end{aligned}$$(3.13)where
$$\begin{aligned} \rho _{1} = \min \left\{ \left( \gamma _{1} - 1\right) \lambda _{1}^{-} ,\left( \gamma _{2} - 1\right) \lambda _{2}^{-} \right\} . \end{aligned}$$ 
(ii)
We have
$$\begin{aligned} \sum _{k = 1}^{\infty } \left\| {x^{k + 1} - x^{k}} \right\| ^{2} + \left\| {y^{k + 1} - y^{k}} \right\| ^{2} = \sum _{k = 1}^{\infty } \left\| {z^{k + 1} - z^{k}} \right\| ^{2} < \infty ,\quad \end{aligned}$$(3.14)and hence \(\lim _{k \rightarrow \infty } \left\| {z^{k + 1} - z^{k}} \right\| = 0\).
Proof

(i)
Fix \(k \ge 0\). Under our Assumption 2(ii), the functions \(x \rightarrow H\left( x , y\right) \) (\(y\) fixed) and \(y \rightarrow H\left( x , y\right) \) (\(x\) fixed) are differentiable and have Lipschitz continuous gradients with moduli \(L_{1}\left( y\right) \) and \(L_{2}\left( x\right) \), respectively. Using the iterative steps (3.3) and (3.4), applying Lemma 2 twice, first with \(h\left( \cdot \right) := H\left( \cdot , y^{k}\right) ,\,\sigma := f\) and \(t := c_{k} > L_{1}(y^{k})\), and secondly with \(h\left( \cdot \right) := H\left( x^{k + 1} , \cdot \right) ,\,\sigma := g\) and \(t := d_{k} > L_{2}(x^{k + 1})\), we obtain successively
$$\begin{aligned} H\left( x^{k +1} , y^{k}\right) + f\left( x^{k + 1}\right)&\le H\left( x^{k} , y^{k}\right) + f\left( x^{k}\right) \\&\quad - \frac{1}{2}\left( c_{k} - L_{1}\left( y^{k}\right) \right) \left\| {x^{k + 1} - x^{k}} \right\| ^{2} \\&= H\left( x^{k} , y^{k}\right) + f\left( x^{k}\right) \\&\quad - \frac{1}{2}\left( \gamma _{1} - 1\right) L_{1}\left( y^{k}\right) \left\| {x^{k + 1} - x^{k}} \right\| ^{2}, \end{aligned}$$and
$$\begin{aligned} H\left( x^{k + 1} , y^{k + 1}\right) + g\left( y^{k + 1}\right)&\le H\left( x^{k +1} , y^{k}\right) + g\left( y^{k}\right) \\&\quad - \frac{1}{2}\left( d_{k} - L_{2}\left( x^{k + 1}\right) \right) \left\| {y^{k + 1} - y^{k}} \right\| ^{2} \\&= H\left( x^{k +1} , y^{k}\right) + g\left( y^{k}\right) \\&\quad - \frac{1}{2}\left( \gamma _{2} - 1\right) L_{2}\left( x^{k + 1}\right) \left\| {y^{k + 1} - y^{k}} \right\| ^{2}. \end{aligned}$$Adding the above two inequalities, we thus obtain for all \(k \ge 0\),
$$\begin{aligned} \varPsi \left( z^{k}\right) - \varPsi \left( z^{k + 1}\right)&= H\left( x^{k} , y^{k}\right) + f\left( x^{k}\right) + g\left( y^{k}\right) - H\left( x^{k + 1} , y^{k + 1}\right) \nonumber \\&\quad - f\left( x^{k + 1}\right) - g\left( y^{k + 1}\right) \nonumber \\&\ge \frac{1}{2}\left( \gamma _{1} - 1\right) L_{1}\left( y^{k}\right) \left\| {x^{k + 1} - x^{k}} \right\| ^{2}\nonumber \\&\quad + \frac{1}{2}\left( \gamma _{2} - 1\right) L_{2}\left( x^{k + 1}\right) \left\| {y^{k + 1} - y^{k}} \right\| ^{2}. \end{aligned}$$(3.15)From (3.15) it follows that the sequence \(\left\{ \varPsi \left( z^{k}\right) \right\} _{k \in \mathbb N }\) is nonincreasing, and since \(\varPsi \) is assumed to be bounded from below (see Assumption 2(i)), it converges to some real number \(\underline{\varPsi }\). Moreover, using the facts that \(L_{1}(y^{k}) \ge \lambda _{1}^{-} > 0\) and \(L_{2}(x^{k + 1}) \ge \lambda _{2}^{-} > 0\) (see Assumption 2(iii)), we get for all \(k \ge 0\):
$$\begin{aligned}&\frac{1}{2}\left( \gamma _{1} - 1\right) L_{1}\left( y^{k}\right) \left\| {x^{k + 1} - x^{k}} \right\| ^{2}+ \frac{1}{2}\left( \gamma _{2} - 1\right) L_{2}\left( x^{k + 1}\right) \left\| {y^{k + 1} - y^{k}} \right\| ^{2} \nonumber \\&\quad \ge \frac{1}{2}\left( \gamma _{1} - 1\right) \lambda _{1}^{-}\left\| {x^{k + 1} - x^{k}} \right\| ^{2}+ \frac{1}{2}\left( \gamma _{2} - 1\right) \lambda _{2}^{-}\left\| {y^{k + 1} - y^{k}} \right\| ^{2} \nonumber \\&\quad \ge \frac{\rho _{1}}{2}\left\| {x^{k + 1} - x^{k}} \right\| ^{2} + \frac{\rho _{1}}{2}\left\| {y^{k + 1} - y^{k}} \right\| ^{2}. \end{aligned}$$(3.16)Combining (3.15) and (3.16) yields
$$\begin{aligned} \frac{\rho _{1}}{2}\left\| {z^{k + 1} - z^{k}} \right\| ^{2} \le \varPsi (z^{k}) - \varPsi (z^{k + 1}), \end{aligned}$$(3.17)and assertion (i) is proved.

(ii)
Let \(N\) be a positive integer. Summing (3.17) from \(k = 0\) to \(N - 1\) we get
$$\begin{aligned} \sum _{k = 0}^{N - 1} \left\| {x^{k + 1} - x^{k}} \right\| ^{2} + \left\| {y^{k + 1} - y^{k}} \right\| ^{2}&= \sum _{k = 0}^{N - 1} \left\| {z^{k + 1} - z^{k}} \right\| ^{2}\\&\le \frac{2}{\rho _{1}}(\varPsi (z^{0}) - \varPsi (z^{N})) \\&\le \frac{2}{\rho _{1}}(\varPsi (z^{0}) - \underline{\varPsi }\,). \end{aligned}$$Taking the limit as \(N \rightarrow \infty \), we obtain the desired assertion (ii).\(\square \)
3.4 Approaching the set of critical points
In order to generate sequences approaching the set of critical points, we first prove the following result.
Lemma 4
(A subgradient lower bound for the iterates gap) Suppose that Assumptions 1 and 2 hold. Let \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) be a sequence generated by PALM which is assumed to be bounded. For each positive integer \(k\), define
$$\begin{aligned} A_{x}^{k} := c_{k - 1}\left( x^{k - 1} - x^{k}\right) + \nabla _{x} H\left( x^{k} , y^{k}\right) - \nabla _{x} H\left( x^{k - 1} , y^{k - 1}\right) \end{aligned}$$(3.18)and
$$\begin{aligned} A_{y}^{k} := d_{k - 1}\left( y^{k - 1} - y^{k}\right) + \nabla _{y} H\left( x^{k} , y^{k}\right) - \nabla _{y} H\left( x^{k} , y^{k - 1}\right) . \end{aligned}$$(3.19)Then \(\left( A_{x}^{k} , A_{y}^{k}\right) \in \partial \varPsi \left( x^{k} , y^{k}\right) \) and there exists \(M > 0\) such that
$$\begin{aligned} \left\| \left( A_{x}^{k} , A_{y}^{k}\right) \right\| \le \left( 2M + 3\rho _{2}\right) \left\| {z^{k} - z^{k - 1}} \right\| , \end{aligned}$$(3.20)where
$$\begin{aligned} \rho _{2} = \max \left\{ \gamma _{1}\lambda _{1}^{+} , \gamma _{2}\lambda _{2}^{+} \right\} . \end{aligned}$$ 
Proof
Let \(k\) be a positive integer. From the definition of the proximal map (2.2) and the iterative step (3.3) we have
$$\begin{aligned} x^{k} \in \mathrm{argmin }_{x \in \mathbb R ^{n}} \left\{ \left\langle {x - x^{k - 1} , \nabla _{x} H\left( x^{k - 1} , y^{k - 1}\right) } \right\rangle + \frac{c_{k - 1}}{2}\left\| {x - x^{k - 1}} \right\| ^{2} + f\left( x\right) \right\} . \end{aligned}$$Writing down the optimality condition yields
$$\begin{aligned} \nabla _{x} H\left( x^{k - 1} , y^{k - 1}\right) + c_{k - 1}\left( x^{k} - x^{k - 1}\right) + u^{k} = 0, \end{aligned}$$where \(u^{k} \in \partial f\left( x^{k}\right) \). Hence
$$\begin{aligned} A_{x}^{k} = c_{k - 1}\left( x^{k - 1} - x^{k}\right) + \nabla _{x} H\left( x^{k} , y^{k}\right) - \nabla _{x} H\left( x^{k - 1} , y^{k - 1}\right) = \nabla _{x} H\left( x^{k} , y^{k}\right) + u^{k}. \end{aligned}$$Similarly, from the iterative step (3.4) we have
$$\begin{aligned} y^{k} \in \mathrm{argmin }_{y \in \mathbb R ^{m}} \left\{ \left\langle {y - y^{k - 1} , \nabla _{y} H\left( x^{k} , y^{k - 1}\right) } \right\rangle + \frac{d_{k - 1}}{2}\left\| {y - y^{k - 1}} \right\| ^{2} + g\left( y\right) \right\} . \end{aligned}$$Again, writing down the optimality condition yields
$$\begin{aligned} \nabla _{y} H\left( x^{k} , y^{k - 1}\right) + d_{k - 1}\left( y^{k} - y^{k - 1}\right) + v^{k} = 0, \end{aligned}$$where \(v^{k} \in \partial g\left( y^{k}\right) \). Hence
$$\begin{aligned} A_{y}^{k} = \nabla _{y} H\left( x^{k} , y^{k}\right) + v^{k}. \end{aligned}$$It is clear, from Proposition 1, that
$$\begin{aligned} \partial \varPsi \left( x^{k} , y^{k}\right) = \left( \nabla _{x} H\left( x^{k} , y^{k}\right) + \partial f\left( x^{k}\right) \right) \times \left( \nabla _{y} H\left( x^{k} , y^{k}\right) + \partial g\left( y^{k}\right) \right) . \end{aligned}$$From all these facts we obtain that \(\left( A_{x}^{k} , A_{y}^{k}\right) \in \partial \varPsi \left( x^{k} , y^{k}\right) \).
We now have to estimate the norms of \(A_{x}^{k}\) and \(A_{y}^{k}\). Since \(\nabla H\) is Lipschitz continuous on bounded subsets of \(\mathbb R ^{n} \times \mathbb R ^{m}\) (see Assumption 2(iv)) and since we assumed that \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) is bounded, there exists \(M > 0\) such that
$$\begin{aligned} \left\| \nabla _{x} H\left( x^{k} , y^{k}\right) - \nabla _{x} H\left( x^{k - 1} , y^{k - 1}\right) \right\| \le M\left\| {z^{k} - z^{k - 1}} \right\| . \end{aligned}$$The moduli \(L_{1}\left( y^{k - 1}\right) \) being bounded from above by \(\lambda _{1}^{+}\) (see Assumption 2(iii)), we get that \(c_{k - 1} \le \gamma _{1}\lambda _{1}^{+}\) and hence
$$\begin{aligned} \left\| A_{x}^{k}\right\| \le c_{k - 1}\left\| {x^{k} - x^{k - 1}} \right\| + M\left\| {z^{k} - z^{k - 1}} \right\| \le \left( M + \gamma _{1}\lambda _{1}^{+}\right) \left\| {z^{k} - z^{k - 1}} \right\| . \end{aligned}$$On the other hand, from the Lipschitz continuity of \(\nabla _{y}H\left( x , \cdot \right) \) (see Assumption 2(ii)), we have that
$$\begin{aligned} \left\| \nabla _{y} H\left( x^{k} , y^{k}\right) - \nabla _{y} H\left( x^{k} , y^{k - 1}\right) \right\| \le L_{2}\left( x^{k}\right) \left\| {y^{k} - y^{k - 1}} \right\| . \end{aligned}$$Since \(L_{2}\left( x^{k}\right) \) is bounded from above by \(\lambda _{2}^{+}\) (see Assumption 2(iii)), we get that \(d_{k - 1} \le \gamma _{2}\lambda _{2}^{+}\) and hence
$$\begin{aligned} \left\| A_{y}^{k}\right\| \le \left( d_{k - 1} + \lambda _{2}^{+}\right) \left\| {y^{k} - y^{k - 1}} \right\| \le \left( 1 + \gamma _{2}\right) \lambda _{2}^{+}\left\| {z^{k} - z^{k - 1}} \right\| . \end{aligned}$$Summing up these estimations, and using \(\gamma _{1}\lambda _{1}^{+} \le \rho _{2}\) and \(\left( 1 + \gamma _{2}\right) \lambda _{2}^{+} \le 2\rho _{2}\), we get the desired result in (3.20), that is,
$$\begin{aligned} \left\| \left( A_{x}^{k} , A_{y}^{k}\right) \right\| \le \left\| A_{x}^{k}\right\| + \left\| A_{y}^{k}\right\| \le \left( 2M + 3\rho _{2}\right) \left\| {z^{k} - z^{k - 1}} \right\| . \end{aligned}$$
This completes the proof. \(\square \)
In the following result, we summarize several properties of the limit point set. Let \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) be a sequence generated by PALM from a starting point \(z^{0}\). The set of all limit points is denoted by \(\omega \left( z^{0}\right) \), i.e.,
$$\begin{aligned} \omega \left( z^{0}\right) = \left\{ \overline{z} \in \mathbb R ^{n} \times \mathbb R ^{m} : \; \exists \text { an increasing sequence of integers } \left\{ k_{q}\right\} _{q \in \mathbb N } \text { such that } z^{k_{q}} \rightarrow \overline{z} \text { as } q \rightarrow \infty \right\} . \end{aligned}$$
Lemma 5
(Properties of the limit point set \(\omega \left( z^{0}\right) \)) Suppose that Assumptions 1 and 2 hold. Let \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) be a sequence generated by PALM which is assumed to be bounded. The following assertions hold.

(i)
\(\emptyset \ne \omega \left( z^{0}\right) \subset \hbox {crit}\,{\varPsi }\)

(ii)
We have
$$\begin{aligned} \lim _{k \rightarrow \infty } \hbox {dist}\left( z^{k} , \omega \left( z^{0}\right) \right) = 0. \end{aligned}$$(3.25) 
(iii)
\(\omega \left( z^{0}\right) \) is a nonempty, compact and connected set.

(iv)
The objective function \(\varPsi \) is finite and constant on \(\omega \left( z^{0}\right) \).
Proof

(i)
Let \(z^{*} = \left( x^{*} , y^{*}\right) \) be a limit point of \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N } = \left\{ \left( x^{k} , y^{k}\right) \right\} _{k \in \mathbb N }\). This means that there is a subsequence \(\left\{ \left( x^{k_{q}} , y^{k_{q}}\right) \right\} _{q \in \mathbb N }\) such that \(\left( x^{k_{q}} , y^{k_{q}}\right) \rightarrow \left( x^{*} , y^{*}\right) \) as \(q \rightarrow \infty \). Since \(f\) and \(g\) are lower semicontinuous (see Assumption 1(i)), we obtain that
$$\begin{aligned} \liminf _{{q} \rightarrow {\infty }} f\left( x^{k_{q}}\right) \ge f\left( x^{*}\right) \quad \text {and} \quad \liminf _{{q} \rightarrow {\infty }} g\left( y^{k_{q}}\right) \ge g\left( y^{*}\right) . \end{aligned}$$(3.26)From the iterative step (3.3), we have for every integer \(k\)
$$\begin{aligned} x^{k + 1} \in \mathrm{argmin }_{x \in \mathbb R ^{n}} \left\{ \left\langle {x - x^{k} , \nabla _{x} H\left( x^{k} , y^{k}\right) } \right\rangle + \frac{c_{k}}{2}\left\| {x - x^{k}} \right\| ^{2} + f\left( x\right) \right\} . \end{aligned}$$Thus letting \(x = x^{*}\) in the above, we get
$$\begin{aligned}&\left\langle {x^{k + 1} - x^{k} , \nabla _{x} H\left( x^{k} , y^{k}\right) } \right\rangle + \frac{c_{k}}{2}\left\| {x^{k + 1} - x^{k}} \right\| ^{2} + f\left( x^{k + 1}\right) \\&\quad \le \left\langle {x^{*} - x^{k} , \nabla _{x} H\left( x^{k} , y^{k}\right) } \right\rangle + \frac{c_{k}}{2}\left\| {x^{*} - x^{k}} \right\| ^{2}+ f\left( x^{*}\right) . \end{aligned}$$Choosing \(k = k_{q} - 1\) in the above inequality and letting \(q\) go to \(\infty \), we obtain
$$\begin{aligned} \limsup _{q \rightarrow \infty } f\left( x^{k_{q}}\right)&\le \limsup _{q \rightarrow \infty } \Bigg (\left\langle {x^{*} - x^{k_{q} - 1} , \nabla _{x} H\left( x^{k_{q} - 1} , y^{k_{q} - 1}\right) } \right\rangle \nonumber \\&\quad + \frac{c_{k_{q} - 1}}{2}\left\| {x^{*} - x^{k_{q} - 1}} \right\| ^{2}\Bigg ) + f\left( x^{*}\right) , \end{aligned}$$(3.27)where we have used the facts that both sequences \(\left\{ {x}^{{k}}\right\} _{{k} \in \mathbb N }\) and \(\left\{ {c}_{{k}}\right\} _{{k} \in \mathbb N }\) are bounded, that \(\nabla H\) is continuous, and that the distance between two successive iterates tends to zero (see Lemma 3(ii)). For that very reason we also have \(x^{k_{q} - 1} \rightarrow x^{*}\) as \(q \rightarrow \infty \); hence (3.27) reduces to \( \limsup _{q \rightarrow \infty } f\left( x^{k_{q}}\right) \le f\left( x^{*}\right) \). Thus, in view of (3.26), \(f\left( x^{k_{q}}\right) \) tends to \(f\left( x^{*}\right) \) as \(q \rightarrow \infty \). Arguing similarly with \(g\) and \(y^{k}\), we finally obtain
$$\begin{aligned} \lim _{q\rightarrow \infty } \varPsi \left( x^{k_{q}} , y^{k_{q}}\right)&= \lim _{q\rightarrow \infty } \left\{ H\left( x^{k_{q}} , y^{k_{q}}\right) + f\left( x^{k_{q}}\right) + g\left( y^{k_{q}}\right) \right\} \\&= H\left( x^{*} , y^{*}\right) + f\left( x^{*}\right) + g\left( y^{*}\right) \\&= \varPsi \left( x^{*} , y^{*}\right) \!. \end{aligned}$$On the other hand we know from Lemmas 3(ii) and 4 that \(\left( A_{x}^{k} , A_{y}^{k}\right) \in \partial \varPsi \left( x^{k} , y^{k}\right) \) and \(\left( A_{x}^{k} , A_{y}^{k}\right) \rightarrow \left( 0 , 0\right) \) as \(k \rightarrow \infty \). The closedness property of \(\partial \varPsi \) (see Remark 1(ii)) implies thus that \(\left( 0 , 0\right) \in \partial \varPsi \left( x^{*} , y^{*}\right) \). This proves that \(\left( x^{*} , y^{*}\right) \) is a critical point of \(\varPsi \).

(ii)
This item follows as an elementary consequence of the definition of limit points.

(iii)
Set \(\omega = \omega \left( z^{0}\right) \). Observe that \(\omega \) can be viewed as an intersection of compact sets
$$\begin{aligned} \omega = \bigcap _{q \in \mathbb N } \, \overline{\bigcup _{k \ge q} \left\{ z^{k} \right\} }, \end{aligned}$$so it is also compact. Towards a contradiction, assume that \(\omega \) is not connected. Then there exist two nonempty, closed and disjoint subsets \(A\) and \(B\) of \(\omega \) such that \(\omega = A \cup B\). Consider the function \(\gamma : \mathbb R ^{n} \times \mathbb R ^{m} \rightarrow \mathbb R \) defined by
$$\begin{aligned} \gamma \left( z\right) = \frac{{\hbox {dist}}\left( z , A\right) }{{\hbox {dist}}\left( z , A\right) + {\hbox {dist}}\left( z , B\right) } \end{aligned}$$for all \(z \in \mathbb R ^{n} \times \mathbb R ^{m}\). Due to the closedness of \(A\) and \(B\), the function \(\gamma \) is well defined; it is also continuous. Note that \(A = \gamma ^{-1}\left( \left\{ 0 \right\} \right) = \left[ \gamma = 0\right] \) and \(B = \gamma ^{-1}\left( \left\{ 1 \right\} \right) = \left[ \gamma = 1\right] \). Setting \(U = \left[ \gamma < 1/4\right] \) and \(V = \left[ \gamma > 3/4\right] \), we obtain two open neighborhoods of the compact sets \(A\) and \(B\), respectively. There exists an integer \(k_{0}\) such that \(z^{k}\) belongs either to \(U\) or to \(V\) for all \(k \ge k_{0}\). Supposing the contrary, there would exist a subsequence \(\left\{ z^{k_{q}} \right\} _{q \in \mathbb N }\) evolving in the complement of the open set \(U \cup V\). This would imply the existence of a limit point \(z^{*}\) of \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) in \(\left( \mathbb R ^{n} \times \mathbb R ^{m}\right) {\setminus }\left( U \cup V\right) \), which is impossible. Put \(r_{k} = \gamma \left( z^{k}\right) \) for each integer \(k\). The sequence \(\left\{ {r}_{{k}}\right\} _{{k} \in \mathbb N }\) satisfies:

1.
\(r_{k} \notin \left[ 1/4 , 3/4\right] \) for all \(k \ge k_0\).

2.
There exist infinitely many \(k\) such that \(r_{k} < 1/4\).

3.
There exist infinitely many \(k\) such that \(r_{k} > 3/4\).

4.
The difference \(\left| r_{k + 1} - r_{k}\right| \) tends to \(0\) as \(k\) goes to infinity.
The last point follows from the fact that \(\gamma \) is uniformly continuous on bounded sets together with the assumption that \(\left\| {z^{k + 1} - z^{k}} \right\| \rightarrow 0\). Clearly, no sequence can comply with the above requirements. The set \(\omega \) is therefore connected.


(iv)
Denote by \(l\) the finite limit of \(\varPsi \left( z^{k}\right) \) as \(k\) goes to infinity. Take \(z^{*}\) in \(\omega \left( z^{0}\right) \). There exists a subsequence \(z^{k_{q}}\) converging to \(z^{*}\) as \(q\) goes to infinity. On one hand the sequence \(\left\{ \varPsi \left( z^{k_{q}}\right) \right\} _{q \in \mathbb N }\) converges to \(l\) and on the other hand (as we proved in assertion (i)) we have \(\varPsi \left( z^{*}\right) = l\). Hence the restriction of \(\varPsi \) to \(\omega \left( z^{0}\right) \) equals \(l\).
\(\square \)
Remark 5
Note that properties (ii) and (iii) in Lemma 5 are generic for any sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) satisfying \(\left\| {z^{k + 1} - z^{k}} \right\| \rightarrow 0\) as \(k\) goes to infinity.
Our objective is now to prove that the sequence generated by PALM converges to a critical point of Problem \((M)\). For that purpose we now assume that the objective of Problem \((M)\) is a KL function, which is the case, for example, when \(f , g\) and \(H\) are semialgebraic (see the Appendix for more details).
3.5 Convergence of PALM to critical points of problem \((M)\)
Before proving our main theorem, we adjust the following result, which was established in [1, Lemma 1] for the Łojasiewicz property, to the more general KL setting.
Lemma 6
(Uniformized KL property) Let \(\varOmega \) be a compact set and let \(\sigma : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \) be a proper and lower semicontinuous function. Assume that \(\sigma \) is constant on \(\varOmega \) and satisfies the KL property at each point of \(\varOmega \). Then, there exist \(\varepsilon > 0,\,\eta > 0\) and \(\varphi \in \Phi _{\eta }\) such that for all \(\overline{u}\) in \(\varOmega \) and all \(u\) in the following intersection:
$$\begin{aligned} \left\{ u \in \mathbb R ^{d} : \hbox {dist}\left( u , \varOmega \right) < \varepsilon \right\} \cap \left[ \sigma \left( \overline{u}\right) < \sigma \left( u\right) < \sigma \left( \overline{u}\right) + \eta \right] , \end{aligned}$$(3.28)one has,
$$\begin{aligned} \varphi ^{\prime }\left( \sigma \left( u\right) - \sigma \left( \overline{u}\right) \right) \hbox {dist}\left( 0 , \partial \sigma \left( u\right) \right) \ge 1. \end{aligned}$$(3.29)
Proof
Denote by \(\mu \) the value of \(\sigma \) over \(\varOmega \). The compact set \(\varOmega \) can be covered by a finite number of open balls \(B\left( u_{i} , \varepsilon _{i}\right) \) (with \(u_{i} \in \varOmega \) for \(i = 1 , \ldots , p\)) on which the KL property holds. For each \(i = 1 , \ldots , p\), we denote the corresponding desingularizing function by \(\varphi _{i} : \left[ 0 , \eta _{i}\right) \rightarrow \mathbb R _{+}\) with \(\eta _{i} > 0\). For each \(u \in B\left( u_{i} , \varepsilon _{i}\right) \cap \left[ \mu < \sigma < \mu + \eta _{i}\right] \), we thus have
$$\begin{aligned} \varphi _{i}^{\prime }\left( \sigma \left( u\right) - \mu \right) \hbox {dist}\left( 0 , \partial \sigma \left( u\right) \right) \ge 1. \end{aligned}$$(3.30)Choose \(\varepsilon > 0\) sufficiently small so that
$$\begin{aligned} U_{\varepsilon } := \left\{ u \in \mathbb R ^{d} : \hbox {dist}\left( u , \varOmega \right) < \varepsilon \right\} \subset \bigcup _{i = 1}^{p} B\left( u_{i} , \varepsilon _{i}\right) . \end{aligned}$$Set \(\eta = \min \left\{ \eta _{i} : \; i = 1 , \ldots , p \right\} >0\) and
$$\begin{aligned} \varphi \left( s\right) := \sum _{i = 1}^{p} \varphi _{i}\left( s\right) , \quad s \in \left[ 0 , \eta \right) . \end{aligned}$$(3.31)Observe now that, for all \(u\) in \(U_{\varepsilon } \cap \left[ \mu < \sigma < \mu + \eta \right] \), picking any \(i\) with \(u \in B\left( u_{i} , \varepsilon _{i}\right) \) and using \(\varphi ^{\prime } \ge \varphi _{i}^{\prime }\) on \(\left[ 0 , \eta \right) \), we obtain from (3.30) and (3.31)
$$\begin{aligned} \varphi ^{\prime }\left( \sigma \left( u\right) - \mu \right) \hbox {dist}\left( 0 , \partial \sigma \left( u\right) \right) \ge \varphi _{i}^{\prime }\left( \sigma \left( u\right) - \mu \right) \hbox {dist}\left( 0 , \partial \sigma \left( u\right) \right) \ge 1. \end{aligned}$$
This completes the proof. \(\square \)
Now we will prove the main result.
Theorem 1
(A finite length property) Suppose that \(\varPsi \) is a KL function such that Assumptions 1 and 2 hold. Let \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) be a sequence generated by PALM which is assumed to be bounded. The following assertions hold.

(i)
The sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) has finite length, that is,
$$\begin{aligned} \sum _{k = 1}^{\infty } \left\| {z^{k + 1} - z^{k}} \right\| < \infty . \end{aligned}$$(3.32) 
(ii)
The sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) converges to a critical point \(z^{*} = \left( x^{*} , y^{*}\right) \) of \(\varPsi \).
Proof
Since \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) is bounded there exists a subsequence \(\left\{ z^{k_{q}} \right\} _{q \in \mathbb N }\) such that \(z^{k_{q}} \rightarrow \overline{z}\) as \(q \rightarrow \infty \). In a similar way as in Lemma 5(i) we get that
$$\begin{aligned} \lim _{q \rightarrow \infty } \varPsi \left( z^{k_{q}}\right) = \varPsi \left( \overline{z}\right) . \end{aligned}$$(3.33)
If there exists an integer \(\bar{k}\) for which \(\varPsi \left( z^{\bar{k}}\right) = \varPsi \left( \overline{z}\right) \), then the decrease property (3.13) implies that \(z^{\bar{k} + 1} = z^{\bar{k}}\). A trivial induction then shows that the sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) is stationary and the announced results are obvious. We may therefore assume that this does not occur; since \(\left\{ \varPsi \left( z^{k}\right) \right\} _{k \in \mathbb N }\) is nonincreasing, it is then clear from (3.33) that \(\varPsi \left( \overline{z}\right) < \varPsi \left( z^{k}\right) \) for all \(k \ge 0\). Again from (3.33), for any \(\eta > 0\) there exists a nonnegative integer \(k_{0}\) such that \(\varPsi \left( z^{k}\right) < \varPsi \left( \overline{z}\right) + \eta \) for all \(k > k_{0}\). From (3.25) we know that \(\lim _{k \rightarrow \infty } \hbox {dist}\left( z^{k} , \omega \left( z^{0}\right) \right) = 0\), which means that for any \(\varepsilon > 0\) there exists a positive integer \(k_{1}\) such that \(\hbox {dist}\left( z^{k} , \omega \left( z^{0}\right) \right) < \varepsilon \) for all \(k > k_{1}\). Summing up all these facts, we conclude that \(z^{k}\) belongs to the intersection in (3.28) for all \(k > l := \max \left\{ k_{0} , k_{1} \right\} \).

(i)
Since \(\omega \left( z^{0}\right) \) is nonempty and compact (see Lemma 5(iii)), and since \(\varPsi \) is finite and constant on \(\omega \left( z^{0}\right) \) (see Lemma 5(iv)), we can apply Lemma 6 with \(\varOmega = \omega \left( z^{0}\right) \). Therefore for any \(k > l\) we have
$$\begin{aligned} \varphi ^{\prime }\left( \varPsi \left( z^{k}\right) - \varPsi \left( \overline{z}\right) \right) \hbox {dist}\left( 0 , \partial \varPsi \left( z^{k}\right) \right) \ge 1. \end{aligned}$$(3.34)This makes sense since we know that \(\varPsi \left( z^{k}\right) > \varPsi \left( \overline{z}\right) \) for any \(k > l\). From Lemma 4 we get that
$$\begin{aligned} \varphi ^{\prime }\left( \varPsi \left( z^{k}\right) - \varPsi \left( \overline{z}\right) \right) \ge \frac{1}{2M + 3\rho _{2}}\left\| {z^{k} - z^{k - 1}} \right\| ^{-1}. \end{aligned}$$(3.35)On the other hand, from the concavity of \(\varphi \) we get that
$$\begin{aligned}&\varphi \left( \varPsi \left( z^{k}\right) - \varPsi \left( \overline{z}\right) \right) - \varphi \left( \varPsi \left( z^{k + 1}\right) - \varPsi \left( \overline{z}\right) \right) \nonumber \\&\quad \ge \varphi ^{\prime }\left( \varPsi \left( z^{k}\right) - \varPsi \left( \overline{z}\right) \right) \left( \varPsi \left( z^{k}\right) - \varPsi \left( z^{k + 1}\right) \right) . \end{aligned}$$(3.36)For convenience, we define for all \(p , q \in \mathbb N \) and \(\overline{z}\) the following quantities
$$\begin{aligned} \varDelta _{p , q} := \varphi \left( \varPsi \left( z^{p}\right) - \varPsi \left( \overline{z}\right) \right) - \varphi \left( \varPsi \left( z^{q}\right) - \varPsi \left( \overline{z}\right) \right) , \end{aligned}$$and
$$\begin{aligned} C := \frac{2\left( 2M + 3\rho _{2}\right) }{\rho _{1}} \in \left( 0 , \infty \right) . \end{aligned}$$Combining Lemma 3(i) with (3.35) and (3.36) yields for any \(k > l\) that
$$\begin{aligned} \varDelta _{k , k + 1} \ge \frac{\left\| {z^{k + 1} - z^{k}} \right\| ^{2}}{C\left\| {z^{k} - z^{k - 1}} \right\| }, \end{aligned}$$(3.37)and hence
$$\begin{aligned} \left\| {z^{k + 1} - z^{k}} \right\| ^{2} \le C\varDelta _{k , k + 1}\left\| {z^{k} - z^{k - 1}} \right\| . \end{aligned}$$Using the fact that \(2\sqrt{\alpha \beta } \le \alpha + \beta \) for all \(\alpha , \beta \ge 0\), we infer
$$\begin{aligned} 2\left\| {z^{k + 1} - z^{k}} \right\| \le \left\| {z^{k} - z^{k - 1}} \right\| + C\varDelta _{k , k + 1}. \end{aligned}$$(3.38)Let us now prove that for any \(k > l\) the following inequality holds
$$\begin{aligned} \sum _{i = l + 1}^{k} \left\| {z^{i + 1} - z^{i}} \right\| \le \left\| {z^{l + 1} - z^{l}} \right\| + C\varDelta _{l + 1 , k + 1}. \end{aligned}$$Summing up (3.38) for \(i = l + 1 , \ldots , k\) yields
$$\begin{aligned} 2\sum _{i = l + 1}^{k} \left\| {z^{i + 1} - z^{i}} \right\|&\le \sum _{i = l + 1}^{k} \left\| {z^{i} - z^{i - 1}} \right\| + C\sum _{i = l + 1}^{k} \varDelta _{i , i + 1} \\&\le \sum _{i = l + 1}^{k} \left\| {z^{i + 1} - z^{i}} \right\| + \left\| {z^{l + 1} - z^{l}} \right\| + C\sum _{i = l + 1}^{k} \varDelta _{i , i + 1} \\&= \sum _{i = l + 1}^{k} \left\| {z^{i + 1} - z^{i}} \right\| + \left\| {z^{l + 1} - z^{l}} \right\| + C\varDelta _{l + 1 , k + 1}, \end{aligned}$$where the last equality follows from the fact that \(\varDelta _{p , q} + \varDelta _{q , r} = \varDelta _{p , r}\) for all \(p , q , r \in \mathbb N \). Since \(\varphi \ge 0\), we thus have for any \(k > l\) that
$$\begin{aligned} \sum _{i = l + 1}^{k} \left\| {z^{i + 1} - z^{i}} \right\| \le \left\| {z^{l + 1} - z^{l}} \right\| + C\varphi \left( \varPsi \left( z^{l + 1}\right) - \varPsi \left( \overline{z}\right) \right) . \end{aligned}$$This shows that the sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) has finite length, that is,
$$\begin{aligned} \sum _{k = 1}^{\infty } \left\| {z^{k + 1} - z^{k}} \right\| < \infty . \end{aligned}$$(3.39)
(ii)
It is clear that (3.39) implies that the sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) is a Cauchy sequence and hence is a convergent sequence. Indeed, with \(q > p > l\) we have
$$\begin{aligned} z^{q} - z^{p} = \sum _{k = p}^{q - 1} \left( z^{k + 1} - z^{k}\right) \end{aligned}$$hence
$$\begin{aligned} \left\| {z^{q} - z^{p}} \right\| = \left\| {\sum _{k = p}^{q - 1} \left( z^{k + 1} - z^{k}\right) } \right\| \le \sum _{k = p}^{q - 1} \left\| {z^{k + 1} - z^{k}} \right\| . \end{aligned}$$Since (3.39) implies that \(\sum _{k = l + 1}^{\infty } \left\| {z^{k + 1} - z^{k}} \right\| \) converges to zero as \(l \rightarrow \infty \), it follows that \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) is a Cauchy sequence and hence converges. Now the result follows immediately from Lemma 5(i).
This completes the proof. \(\square \)
Remark 6

(i)
The boundedness assumption on the generated sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) holds in several scenarios such as when the functions \(f\) and \(g\) have bounded level sets. For a few more scenarios see [2].

(ii)
An important and fundamental case of application of Theorem 1 is when the data functions \(f , g\) and \(H\) are semialgebraic. Observe also that the desingularizing function for semialgebraic problems can be chosen to be of the form
$$\begin{aligned} \varphi \left( s\right) = cs^{1 - \theta }, \end{aligned}$$(3.40)where \(c\) is a positive real number and \(\theta \) belongs to \(\left[ 0 , 1\right) \) (see [1] for more details). As explained below, this fact impacts the convergence rate of the method.
If the desingularizing function \(\varphi \) of \(\varPsi \) is of the form (3.40), then, as in [1], the following estimates hold.

(i)
If \(\theta = 0\) then the sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) converges in a finite number of steps.

(ii)
If \(\theta \in \left( 0 , 1/2\right] \) then there exist \(\omega > 0\) and \(\tau \in \left[ 0 , 1\right) \) such that \(\left\| {z^{k} - \overline{z}} \right\| \le \omega \, \tau ^{k}\).

(iii)
If \(\theta \in \left( 1/2 , 1\right) \) then there exists \(\omega > 0\) such that
$$\begin{aligned} \left\| {z^{k} - \overline{z}} \right\| \le \omega \, k^{-\frac{1 - \theta }{2\theta - 1}}. \end{aligned}$$
3.6 Extension of PALM for \(p\) blocks
The simple structure of PALM allows us to extend it to the more general setting involving \(p > 2\) blocks, for which Theorem 1 still holds. This is briefly outlined below. Suppose that our optimization problem is now given as
$$\begin{aligned} \min _{x \in \mathbb R ^{N}} \; \varPsi \left( x_{1} , \ldots , x_{p}\right) := H\left( x_{1} , \ldots , x_{p}\right) + \sum _{i = 1}^{p} f_{i}\left( x_{i}\right) , \end{aligned}$$
where \(H: \mathbb R ^{N} \rightarrow \mathbb R \) with \(N = \sum _{i = 1}^{p} n_{i}\) is assumed to be \(C^{1}\) and each \(f_{i},\,i = 1 , \ldots , p\), is a proper and lower semicontinuous function (this is exactly Assumption 1 for \(p > 2\)). We also assume that a modified version of Assumption 2 for \(p > 2\) blocks holds. In this case we denote by \(\nabla _{i} H\) the gradient of \(H\) with respect to the variable \(x_{i},\,i = 1 , \ldots , p\). We denote by \(L_{i},\,i = 1 , \ldots , p\), the Lipschitz moduli of \(\nabla _{i} H\left( x_{1} , \ldots , \cdot , \ldots , x_{p}\right) \), that is, of the gradient of \(H\) with respect to the variable \(x_{i}\) when all \(x_{j},\,j \ne i\) (\(j = 1 , \ldots , p\)), are fixed. Similarly to Assumption 2(ii), it is clear that each \(L_{i},\,i = 1 , \ldots , p\), is a function of the \(p - 1\) variables \(x_{j},\,j \ne i\).
For simplicity of the presentation of PALM in the case of \(p > 2\) blocks we will use the following notations. Denote \(x^{k} = \left( x_{1}^{k} , x_{2}^{k} , \ldots , x_{p}^{k}\right) \) and, for \(i = 0 , 1 , \ldots , p\),
$$\begin{aligned} x^{k}\left( i\right) = \left( x_{1}^{k + 1} , \ldots , x_{i}^{k + 1} , x_{i + 1}^{k} , \ldots , x_{p}^{k}\right) . \end{aligned}$$
Therefore \(x^{k}(0) = \left( x_{1}^{k} , x_{2}^{k} , \ldots , x_{p}^{k}\right) = x^{k}\) and \(x^{k}(p) = \left( x_{1}^{k + 1} , x_{2}^{k + 1} , \ldots , x_{p}^{k + 1}\right) = x^{k + 1}\).
In this case the algorithm PALM minimizes \(\varPsi \) with respect to each of \(x_{1} , \ldots , x_{p}\), taken in cyclic order while fixing the previously computed iterates. More precisely, starting with any \(\left( x_{1}^{0} , x_{2}^{0} , \ldots , x_{p}^{0}\right) \in \mathbb R ^{N}\), PALM generates a sequence \(\left\{ {x}^{{k}}\right\} _{{k} \in \mathbb N }\) via the following successive scheme: for \(i = 1 , \ldots , p\),
$$\begin{aligned} x_{i}^{k + 1} \in \hbox {prox}_{c_{i}^{k}}^{f_{i}}\left( x_{i}^{k} - \frac{1}{c_{i}^{k}}\nabla _{i} H\left( x^{k}\left( i - 1\right) \right) \right) , \end{aligned}$$
where \(c_{i}^{k} = \gamma _{i}L_{i}\) with \(\gamma _{i} > 1\) (here \(L_{i}\) is evaluated at the blocks \(x_{j},\,j \ne i\), of \(x^{k}\left( i - 1\right) \)). Theorem 1 can then be applied to the \(p\)-blocks version of PALM.
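As a hedged illustration of the cyclic \(p\)-block scheme, the sketch below runs it on a toy three-block coupling \(H(x_{1} , x_{2} , x_{3}) = \frac{1}{2}(x_{1}x_{2}x_{3} - 8)^{2}\) with nonnegativity constraints \(f_{i} = \delta _{[0 , \infty )}\); the objective, the constants and the safeguard are our own choices, not from the paper:

```python
def palm_p_blocks(x, gamma=1.2, lam=1e-3, iters=400):
    """Cyclic p-block PALM sketch for
    H(x) = 0.5*(x_1*...*x_p - 8)**2 with f_i = indicator of [0, +inf).
    grad_i H = (product of other blocks) * (product of all blocks - 8);
    its Lipschitz modulus in x_i is (product of other blocks)**2."""
    p = len(x)
    for _ in range(iters):
        for i in range(p):                       # blocks in cyclic order
            others = 1.0
            for j in range(p):
                if j != i:
                    others *= x[j]
            g = others * (others * x[i] - 8.0)   # grad_i H at x^k(i-1)
            c = gamma * max(others * others, lam)  # c_i^k = gamma_i * L_i
            x[i] = max(0.0, x[i] - g / c)        # prox = projection
    return x

x = palm_p_blocks([2.5, 2.0, 1.5])
assert abs(x[0] * x[1] * x[2] - 8.0) < 1e-6
```

Each inner update is exactly one proximal-gradient step on block \(i\) with the most recently updated values of the other blocks held fixed.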
3.7 The proximal forward–backward scheme
When there is no \(y\) term, PALM reduces to the proximal forward–backward (PFB) scheme. In this case we have \(\varPsi \left( x\right) := f\left( x\right) + h\left( x\right) \) (where \(h\left( x\right) \equiv H\left( x , 0\right) \)), and the proximal forward–backward scheme for minimizing \(\varPsi \) can simply be viewed as the proximal regularization of \(h\) linearized at a given point \(x^{k}\), i.e.,
$$\begin{aligned} x^{k + 1} \in \hbox {prox}_{t_{k}}^{f}\left( x^{k} - \frac{1}{t_{k}}\nabla h\left( x^{k}\right) \right) , \quad t_{k} > L_{h}. \end{aligned}$$
A convergence result for the PFB scheme was first proved in [3] via the abstract framework developed in that paper. Our approach allows for a simpler and more direct proof. The sufficient decrease property of the sequence \(\left\{ \varPsi \left( x^{k}\right) \right\} _{k \in \mathbb N }\) follows directly from Lemma 2 with \(\sigma := f\) and \(t := t_{k} > L_{h}\). The second property, a subgradient lower bound for the iterates gap, follows from the Lipschitz continuity of \(\nabla h\). The global convergence result then follows immediately from Theorem 1. For the sake of completeness we record the result in the following proposition.
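A minimal numerical sketch of the PFB scheme follows, under our own illustrative assumptions (\(h(x) = \frac{1}{2}\left\| Bx - b \right\| ^{2}\), \(f = \left\| \cdot \right\| _{1}\), random data, and a fixed step \(t_{k} \equiv t > L_{h}\)); it also monitors the step norms, whose summability is the finite length property:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((20, 10))
b = rng.standard_normal(20)

# Psi(x) = f(x) + h(x) with h(x) = 0.5*||Bx - b||^2 and f = l1 norm;
# grad h is L_h-Lipschitz with L_h = ||B^T B||_2 (spectral norm).
grad_h = lambda x: B.T @ (B @ x - b)
L_h = np.linalg.norm(B.T @ B, 2)
t = 1.01 * L_h                                   # fixed t_k = t > L_h
# prox_t^f for the l1 norm is soft-thresholding at level 1/t.
prox = lambda v: np.sign(v) * np.maximum(np.abs(v) - 1.0 / t, 0.0)

x = np.zeros(10)
steps = []
for _ in range(5000):
    x_new = prox(x - grad_h(x) / t)              # forward-backward step
    steps.append(np.linalg.norm(x_new - x))
    x = x_new

assert sum(steps) < 100.0        # finite length behaviour in practice
assert steps[-1] < 1e-8          # the iterates have converged
# x is numerically a fixed point of the scheme, i.e. a critical point of f + h
assert np.linalg.norm(x - prox(x - grad_h(x) / t)) < 1e-6
```

The fixed-point residual checked at the end is the standard criticality certificate for the forward–backward map.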
Proposition 3
(A convergence result of PFB) Let \(h : \mathbb R ^{d} \rightarrow \mathbb R \) be a continuously differentiable function with gradient \(\nabla h\) assumed \(L_{h}\)-Lipschitz continuous and let \(f : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \) be a proper and lower semicontinuous function with \(\inf _{\mathbb R ^{d}} f > -\infty \). Assume that \(f + h\) is a KL function. Let \(\left\{ {x}^{{k}}\right\} _{{k} \in \mathbb N }\) be a sequence generated by PFB which is assumed to be bounded and let \(t_{k} > L_{h}\). The following assertions hold.

(i)
The sequence \(\left\{ {x}^{{k}}\right\} _{{k} \in \mathbb N }\) has finite length, that is,
$$\begin{aligned} \sum _{k = 1}^{\infty } \left\| {x^{k + 1} - x^{k}} \right\| < \infty . \end{aligned}$$ 
(ii)
The sequence \(\left\{ {x}^{{k}}\right\} _{{k} \in \mathbb N }\) converges to a critical point \(x^{*}\) of \(f+h\).
It is well-known that PFB reduces to the projected gradient method (PGM) when \(f = \delta _{X}\) (where \(X\) is a nonempty, closed and possibly nonconvex subset of \(\mathbb R ^{d}\)), i.e., PGM generates a sequence \(\left\{ {x}^{{k}}\right\} _{{k} \in \mathbb N }\) via
$$\begin{aligned} x^{k + 1} \in P_{X}\left( x^{k} - \frac{1}{t_{k}}\nabla h\left( x^{k}\right) \right) , \quad t_{k} > L_{h}. \end{aligned}$$
Thus when \(h + \delta _{X}\) is a KL function and \(h \in C_{L_{h}}^{1,1}\), global convergence of the sequence \(\left\{ {x}^{{k}}\right\} _{{k} \in \mathbb N }\) generated by PGM follows from Proposition 3, and recovers the result established in [3].
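For a concrete nonconvex, semialgebraic instance of PGM, one may take \(X = \{x : \left\| x \right\| _{0} \le s\}\), whose projection keeps the \(s\) largest entries in magnitude. The data and parameters below are our own illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((15, 10))
b = rng.standard_normal(15)
s = 3                                           # sparsity level

h = lambda x: 0.5 * np.sum((B @ x - b) ** 2)
grad_h = lambda x: B.T @ (B @ x - b)
L_h = np.linalg.norm(B.T @ B, 2)
t = 1.1 * L_h                                   # t > L_h

def proj_sparse(v, s):
    """Projection onto {x : ||x||_0 <= s}: keep the s largest |entries|."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

x = np.zeros(10)
vals = [h(x)]
for _ in range(200):
    x = proj_sparse(x - grad_h(x) / t, s)       # projected gradient step
    vals.append(h(x))

assert all(u >= v - 1e-10 for u, v in zip(vals, vals[1:]))  # sufficient decrease
assert np.count_nonzero(x) <= s
```

The monotone decrease observed here is exactly what Lemma 2 guarantees for \(\sigma = \delta _{X}\) with \(t > L_{h}\), even though \(X\) is nonconvex.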
4 An application to matrix factorization problems
Matrix factorization problems play a fundamental role in data analysis and can be found in many disparate applications. A very large body of literature covers this active research area; for a recent account we refer for example to the book [18] and references therein.
In this section we show how PALM can be applied to a broad class of such problems to produce a globally convergent algorithm.
4.1 A broad class of matrix factorization problems
Let \(p, q, m, n\) and \(r\) be given integers. Define the following sets in the space of real matrices:
$$\begin{aligned} \mathcal K _{p , q}&:= \left\{ Z \in \mathbb R ^{p \times q} : \; Z \ge 0 \right\} ,\\ \mathcal F&:= \left\{ X \in \mathbb R ^{m \times r} : \; R_{1}\left( X\right) \le \alpha \right\} , \quad \mathcal G := \left\{ Y \in \mathbb R ^{r \times n} : \; R_{2}\left( Y\right) \le \beta \right\} , \end{aligned}$$
where \(R_{1}\) and \(R_{2}\) are lower semicontinuous functions and \(\alpha , \beta \in \mathbb R _{+}\) are given parameters.
Roughly speaking, the matrix factorization (or approximation) problem consists in finding a product decomposition of a given matrix satisfying certain properties.
4.1.1 The problem
Given a matrix \(A \in \mathbb R ^{m \times n}\) and an integer \(r\) which is much smaller than \(\min \left\{ m , n \right\} \), find two matrices \(X \in \mathbb R ^{m \times r}\) and \(Y \in \mathbb R ^{r \times n}\) such that
$$\begin{aligned} A \approx XY. \end{aligned}$$
The functions \(R_{1}\) and \(R_{2}\) are often used to describe some additional features of the matrices \(X\) and \(Y\), respectively, arising in a specific application at hand (see more below).
To solve the problem, we adopt the optimization approach, that is, we consider the nonconvex and nonsmooth minimization problem
$$\begin{aligned} (MF) \qquad \min \left\{ d\left( A , XY\right) : \; X \in \mathcal K _{m , r} \cap \mathcal F , \; Y \in \mathcal K _{r , n} \cap \mathcal G \right\} , \end{aligned}$$
where \(d : \mathbb R ^{m \times n}\times \mathbb R ^{m \times n} \rightarrow \mathbb R _{+}\) stands as a proximity function measuring the quality of the approximation, satisfying \(d\left( U, V\right) = 0\) if and only if \(U = V\). Note that \(d\left( \cdot , \cdot \right) \) is not necessarily symmetric and is not a metric.
Another way to formulate \((MF)\) is to consider its penalized version, where the "hard" constraints are the candidates to be penalized, i.e., we consider the following penalized problem
$$\begin{aligned} \min \left\{ \varPsi _{c}\left( X , Y\right) := d\left( A , XY\right) + \mu _{1} R_{1}\left( X\right) + \mu _{2} R_{2}\left( Y\right) : \; X \in \mathcal K _{m , r} , \; Y \in \mathcal K _{r , n} \right\} , \end{aligned}$$
where \(\mu _{1} > 0\) and \(\mu _{2} > 0\) are penalty parameters. However, note that the penalty approach requires the tuning of the unknown penalty parameters, which can be a delicate issue.
Both formulations can be written in the form of our general Problem \((M)\) with the obvious identifications for the corresponding \(H , f\) and \(g\); e.g., for \((MF)\) one may take
$$\begin{aligned} H\left( X , Y\right) := d\left( A , XY\right) , \quad f\left( X\right) := \delta _{\mathcal K _{m , r} \cap \mathcal F }\left( X\right) , \quad g\left( Y\right) := \delta _{\mathcal K _{r , n} \cap \mathcal G }\left( Y\right) , \end{aligned}$$
where \(\delta _{C}\) denotes the indicator function of the set \(C\).
Thus, assuming that Assumptions 1 and 2 hold for the problem data quantified here via \([d, \mathcal{F}, \mathcal{G}]\), and that the functions \(d, R_{1}\) and \(R_{2}\) are KL functions, we can apply PALM and Theorem 1 to produce a scheme that converges globally to a critical point of \(\varPsi \), thereby solving the \((MF)\) problem. Such a general formalism does not seem to have been addressed in the literature. It covers a multitude of possible formulations, from which many algorithms can be conceived by appropriate choices of the triple \([d, \mathcal{F}, \mathcal{G}]\) within a given application at hand. This is illustrated next on an important class of problems.
4.2 An algorithm for the sparse nonnegative matrix factorization
To be specific, in the sequel we focus on the classical case where the proximity measure is defined via the Frobenius norm:
$$\begin{aligned} d\left( A , XY\right) := \frac{1}{2} \left\| A - XY \right\| _{F}^{2}, \end{aligned}$$
where, for any matrix \(M\), the Frobenius norm is defined by
$$\begin{aligned} \left\| M \right\| _{F} := \sqrt{\text{ Tr }\left( M^{T} M\right) }, \end{aligned}$$
where \(\text{ Tr }\) is the trace operator. Many other proximity measures can also be used, such as entropy-like distances; see, e.g., [18] and references therein.
Example 1
(Nonnegative matrix factorization) With \(\mathcal F = \mathbb R ^{m \times r}\) and \(\mathcal G = \mathbb R ^{r \times n}\), Problem \((MF)\) reduces to the so-called Nonnegative Matrix Factorization (NMF) problem
$$\begin{aligned} \text{(NMF)} \qquad \min \left\{ \frac{1}{2} \left\| A - XY \right\| _{F}^{2} : \; X \ge 0 , \; Y \ge 0 \right\} . \end{aligned}$$
The nonnegative matrix factorization [23] has been the subject of intense research across a variety of applications (see, e.g., [14] for applications in signal processing). More recently the introduction of "sparsity" has been of particular importance, and variants of NMF involving sparsity have also been considered in the literature (see, e.g., [20, 21]). Many, if not most, algorithms are based on Gauss-Seidel-like methods for solving the NMF problem, see e.g., [11, 18, 24], with quite limited convergence results. Moreover, extended versions of NMF with sparsity were considered via relaxations and corresponding convex reformulations solved by sophisticated and computationally demanding conic programming schemes, see e.g., [20, 21].
To illustrate the benefit of our approach, we now show how PALM can be applied to solve directly the more difficult constrained nonconvex and nonsmooth sparse nonnegative matrix factorization problem "as is", producing a simple convergent scheme.
First we note that the objective function \(H\left( X , Y\right) := d\left( A , XY\right) = \frac{1}{2}\left\| A - XY \right\| _{F}^{2}\) is a real polynomial function, hence semialgebraic; moreover, both functions \(X \mapsto H\left( X , Y\right) \) (for fixed \(Y\)) and \(Y \mapsto H\left( X , Y\right) \) (for fixed \(X\)) are \(C^{1,1}\). Indeed, we have
$$\begin{aligned} \nabla _{X} H\left( X , Y\right) = \left( XY - A\right) Y^{T} \quad \text {and} \quad \nabla _{Y} H\left( X , Y\right) = X^{T}\left( XY - A\right) , \end{aligned}$$
which are Lipschitz continuous with moduli \(L_{1}(Y) = \left\| Y Y^{T} \right\| _{F}\) and \(L_{2}(X) = \left\| X^{T} X \right\| _{F}\), respectively.
As a specific case, let us now consider the overall sparsity measure of a matrix, defined by
$$\begin{aligned} R_{1}\left( X\right) := \left\| X \right\| _{0} = \# \left\{ (i , j) : \; X_{ij} \ne 0 \right\} , \end{aligned}$$
which counts the number of nonzero elements of the matrix \(X\). Similarly, \(R_{2}\left( Y\right) = \left\| Y \right\| _{0}\).
As shown in Example 3 (see the Appendix), both functions \(R_{1}\) and \(R_{2}\) are semialgebraic. Thanks to the properties of semialgebraic functions (see the Appendix), it follows that \(\varPsi _{c}\) is semialgebraic and PALM can be applied to produce a globally convergent algorithm. However, to apply PALM properly, we need to compute, for a given matrix \(U\), the proximal map of the nonconvex function \(\left\| X \right\| _{0}\) restricted to the constraint \(X \ge 0\). It turns out that this can be done effectively, as the next proposition shows. Our result makes use of the following operator (see, e.g., [26]).
Definition 4
Given any matrix \(U \in \mathbb R ^{m \times n}\), define the operator \(T_{s} : \mathbb R ^{m \times n} \rightrightarrows \mathbb R ^{m \times n}\) by
$$\begin{aligned} T_{s}\left( U\right) := \mathop {\mathrm{argmin}} \left\{ \left\| X - U \right\| _{F} : \; \left\| X \right\| _{0} \le s \right\} . \end{aligned}$$
Observe that the operator \(T_{s}\) is in general multivalued. For a given matrix \(U\), it is actually easy to see that the elements of \(T_{s}\left( U\right) \) are obtained by choosing exactly \(s\) indices corresponding to the \(s\) largest entries (in absolute value) of \(U\), setting \(\left( T_{s}\left( U\right) \right) _{ij} = U_{ij}\) for such indices and \(\left( T_{s}\left( U\right) \right) _{ij} = 0\) otherwise. The multivaluedness of \(T_{s}\) comes from the fact that the \(s\) largest entries may not be uniquely defined.
Since computing \(T_{s}\) only requires finding the \(s\) largest (in absolute value) among the \(mn\) entries of a matrix, this can be done in \(\mathcal O \left( mn\right) \) time by a selection algorithm [13], followed by zeroing out the proper entries in one more pass over the \(mn\) entries.
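A concrete realization of one element of \(T_{s}\) can be sketched with numpy; `np.argpartition` plays the role of the selection step (it uses introselect, so expected rather than worst-case linear time, unlike the selection algorithm of [13]).

```python
import numpy as np

def T_s(U, s):
    """Keep the s largest-in-absolute-value entries of U, zero out the rest.
    Returns one element of the (possibly multivalued) operator T_s;
    ties are broken arbitrarily by np.argpartition."""
    flat = np.abs(U).ravel()
    if s >= flat.size:
        return U.copy()
    keep = np.argpartition(flat, -s)[-s:]   # indices of the s largest |U_ij|
    out = np.zeros_like(U)
    out.ravel()[keep] = U.ravel()[keep]
    return out

U = np.array([[3.0, -1.0], [0.5, -4.0]])
V = T_s(U, 2)   # keeps the entries of largest magnitude: 3 and -4
```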
We define the usual projection map onto \(\mathbb R _{+}^{m \times n}\) by
$$\begin{aligned} P_{+}\left( U\right) := \max \left\{ U , 0 \right\} , \end{aligned}$$
where the \(\max \) operation is taken componentwise.
Proposition 4
(Proximal map formula) Let \(U \in \mathbb R ^{m \times n}\) and let \(f := \delta _{X \ge 0} + \delta _{\left\| X \right\| _{0} \le s}\). Then
$$\begin{aligned} \hbox {prox}_{1}^{f}\left( U\right) = T_{s}\left( P_{+}\left( U\right) \right) , \end{aligned}$$
where \(T_{s}\) is defined in Definition 4.
Proof
Given any matrix \(U \in \mathbb R ^{m \times n}\), let us introduce the following notations
where
and
Observe that the following relations hold
and
where the second relation follows from relation (i) and the fact that \(\left( P_{+}\left( U\right) \right) _{ij} = U_{ij}\) for any \((i , j) \in \mathcal{I}^{+}\) and \(\left( P_{+}\left( U\right) \right) _{ij} = 0\) for any \((i , j) \in \mathcal{I}^{-}\).
From the above fact (i), we thus have that \(\bar{X} \in \hbox {prox}_{1}^{f}\left( U\right) \) if and only if
where the last equality follows from the fact that every solution of (4.2) is clearly a solution of (4.1), while the converse implication follows by a simple contradiction argument. Arguing in a similar way, one can see that the constraint \(X\ge 0\) in problem (4.2) can be removed without affecting the optimal solution of that problem. Thus, recalling the facts (ii) and (iii) we obtain
where the last equality is by the definition of \(T_{s}\) (see Definition 4). \(\square \)
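The formula of Proposition 4, \(\hbox {prox}_{1}^{f}(U) = T_{s}(P_{+}(U))\), can be verified numerically on small instances by brute force over supports. The sketch below is illustrative (all names and dimensions are assumptions); since the prox may be multivalued, it compares objective values rather than the minimizers themselves.

```python
import itertools
import numpy as np

def P_plus(U):
    # componentwise projection onto the nonnegative orthant
    return np.maximum(U, 0.0)

def T_s(U, s):
    # keep the s largest-in-absolute-value entries, zero the rest
    flat = np.abs(U).ravel()
    keep = np.argpartition(flat, -s)[-s:]
    out = np.zeros_like(U)
    out.ravel()[keep] = U.ravel()[keep]
    return out

def prox_brute(U, s):
    """Brute-force prox of f = delta_{X>=0} + delta_{||X||_0 <= s}:
    for each support of size s, the best feasible X keeps max(U_ij, 0)
    on that support and is zero elsewhere."""
    best, best_val = None, np.inf
    idx = list(np.ndindex(U.shape))
    for supp in itertools.combinations(idx, s):
        X = np.zeros_like(U)
        for ij in supp:
            X[ij] = max(U[ij], 0.0)
        val = np.linalg.norm(X - U) ** 2
        if val < best_val:
            best, best_val = X, val
    return best

rng = np.random.default_rng(2)
U = rng.standard_normal((3, 3))
X_formula = T_s(P_plus(U), 2)
X_brute = prox_brute(U, 2)
```

Both candidates attain the same distance to \(U\), as the proposition predicts.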
With \(R_{1} := \delta _{X \ge 0} + \delta _{\left\| X \right\| _{0} \le \alpha }\) and \(R_{2} := \delta _{Y \ge 0} + \delta _{\left\| Y \right\| _{0} \le \beta }\), we now have all the ingredients to apply PALM and to formulate explicitly a simple algorithm for the sparse nonnegative matrix factorization problem.
Remark 7

(i)
Observe that PALM-Sparse NMF requires that the Lipschitz moduli \(\left\| X^{k + 1}\left( X^{k + 1}\right) ^{T} \right\| _{F}\) and \(\left\| Y^{k}\left( Y^{k}\right) ^{T} \right\| _{F}\) remain bounded away from zero. Equivalently, this means that we assume that
$$\begin{aligned} \inf _{k \in \mathbb N } \left\{ \left\| X^{k} \right\| _{F} , \left\| Y^{k} \right\| _{F} \right\} > 0. \end{aligned}$$In view of Remark 3(iii), we could avoid this assumption by introducing a safeguard \(\nu > 0\) and simply replacing the Lipschitz moduli in PALM-Sparse NMF by
$$\begin{aligned} \max \left( \nu , \left\| X^{k + 1}\left( X^{k + 1}\right) ^{T} \right\| _{F}\right) \quad \text {and} \quad \max \left( \nu , \left\| Y^{k}\left( Y^{k}\right) ^{T} \right\| _{F}\right) . \end{aligned}$$ 
(ii)
Note that the easier nonnegative matrix factorization problem given in Example 1 is a particular instance of the sparse NMF, and in that case both operators \(T_{\alpha }\) and \(T_{\beta }\) reduce to the identity operator. Hence, the computation in Step 2.1 for NMF reduces to
$$\begin{aligned} X^{k + 1} = P_{+}\left( U^{k}\right) , \end{aligned}$$where \(U^{k}\) is given in (4.3) (and similarly for \(Y^{k + 1}\)). Moreover, since in that case the constraint sets \(\mathcal K _{m , r}\) and \(\mathcal K _{r , n}\) are closed and convex, it follows from Remark 4(iii) that we can set \(c_{k} = \left\| Y^{k}\left( Y^{k}\right) ^{T} \right\| _{F}\) and \(d_{k} = \left\| X^{k + 1}\left( X^{k + 1}\right) ^{T} \right\| _{F}\) in that case.
The assumptions required to apply PALM are clearly satisfied, and hence we can use Theorem 1 to obtain that the generated sequence converges globally to a critical point of the Sparse NMF problem (and similarly for NMF, as a special case). We record this in the following theorem.
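The overall scheme can be sketched as follows. This is a hedged illustration, not the paper's exact pseudocode: the step-size rule (a factor \(\gamma = 1.1\) times the Lipschitz modulus, with the safeguard \(\nu\) of Remark 7(i)), the initialization, and the iteration count are assumptions made for the example, and the prox step uses \(T_{s}\circ P_{+}\) from Proposition 4.

```python
import numpy as np

def palm_sparse_nmf(A, r, s_x, s_y, gamma=1.1, nu=1e-8, iters=200, seed=0):
    """A sketch of PALM for sparse NMF: alternate prox-linearized steps on X
    and Y, with prox = T_s(P_plus(.)) and safeguarded step moduli."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    X = rng.random((m, r))
    Y = rng.random((r, n))

    def prox(U, s):
        # T_s(P_plus(U)): project onto the nonnegative orthant, then keep
        # the s largest entries (all entries are nonnegative after P_plus)
        U = np.maximum(U, 0.0)
        flat = U.ravel()
        keep = np.argpartition(flat, -s)[-s:]
        out = np.zeros_like(U)
        out.ravel()[keep] = flat[keep]
        return out

    for _ in range(iters):
        c = gamma * max(nu, np.linalg.norm(Y @ Y.T))   # step modulus for X
        X = prox(X - ((X @ Y - A) @ Y.T) / c, s_x)
        d = gamma * max(nu, np.linalg.norm(X.T @ X))   # step modulus for Y
        Y = prox(Y - (X.T @ (X @ Y - A)) / d, s_y)
    return X, Y

A = np.random.default_rng(3).random((8, 6))
X, Y = palm_sparse_nmf(A, r=2, s_x=10, s_y=8)
```

By construction every iterate is feasible: nonnegative and with at most \(s_x\) (resp. \(s_y\)) nonzero entries.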
Theorem 2
Let \(\left\{ \left( X^{k} , Y^{k}\right) \right\} _{k \in \mathbb N }\) be a sequence generated by PALM-Sparse NMF which is assumed to be bounded and to satisfy \(\inf _{k \in \mathbb N }\left\{ \left\| X^{k} \right\| _{F} , \left\| Y^{k} \right\| _{F} \right\} > 0\). Then,

(i)
The sequence \(\left\{ \left( X^{k} , Y^{k}\right) \right\} _{k \in \mathbb N }\) has finite length, that is
$$\begin{aligned} \sum _{k = 1}^{\infty } \left( \left\| X^{k + 1} - X^{k} \right\| _{F} + \left\| Y^{k + 1} - Y^{k} \right\| _{F} \right) < \infty . \end{aligned}$$ 
(ii)
The sequence \(\left\{ \left( X^{k} , Y^{k}\right) \right\} _{k \in \mathbb N }\) converges to a critical point \(\left( X^{*} ,Y^{*}\right) \) of the Sparse NMF.
Notes
For instance, it suffices to assume that \(\varPsi \) is coercive to obtain a bounded sequence via (i); see also Remark 6.
References
Attouch, H., Bolte, J.: On the convergence of the proximal algorithm for nonsmooth functions involving analytic features. Math. Program. 116, 5–16 (2009)
Attouch, H., Bolte, J., Redont, P., Soubeyran, A.: Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka–Łojasiewicz inequality. Math. Oper. Res. 35, 438–457 (2010)
Attouch, H., Bolte, J., Svaiter, B.F.: Convergence of descent methods for semialgebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Math. Program. Ser. A 137, 91–129 (2013)
Auslender, A.: Méthodes numériques pour la décomposition et la minimisation de fonctions non différentiables. Numerische Mathematik 18, 213–223 (1971)
Auslender, A.: Optimisation—Méthodes numériques. Masson, Paris (1976)
Auslender, A.: Asymptotic properties of the Fenchel dual functional and applications to decomposition problems. J. Optim. Theory Appl. 73, 427–449 (1992)
Auslender, A., Teboulle, M., Ben-Tiba, S.: Coupling the logarithmic-quadratic proximal method and the block nonlinear Gauss-Seidel algorithm for linearly constrained convex minimization. In: Théra, M., Tichatschke, R. (eds.) Lecture Notes in Economics and Mathematical Systems, vol. 477, pp. 35–47 (1998)
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)
Beck, A., Tetruashvili, L.: On the convergence of block coordinate descent type methods. Preprint (2011)
Berry, M., Browne, M., Langville, A., Pauca, P., Plemmons, R.J.: Algorithms and applications for approximate nonnegative matrix factorization. Comput. Stat. Data Anal. 52, 155–173 (2007)
Bertsekas, D.P., Tsitsiklis, J.N.: Parallel and Distributed Computation: Numerical Methods. Prentice-Hall, New Jersey (1989)
Blum, M., Floyd, R.W., Pratt, V., Rivest, R., Tarjan, R.: Time bounds for selection. J. Comput. Syst. Sci. 7, 448–461 (1973)
Bolte, J., Combettes, P.L., Pesquet, J.C.: Alternating proximal algorithm for blind image recovery. In: Proceedings of the 17th IEEE International Conference on Image Processing, Hong Kong, ICIP, pp. 1673–1676 (2010)
Bolte, J., Daniilidis, A., Ley, O., Mazet, L.: Characterizations of Łojasiewicz inequalities: subgradient flows, talweg, convexity. Trans. Am. Math. Soc. 362, 3319–3363 (2010)
Bolte, J., Daniilidis, A., Lewis, A.: The Łojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems. SIAM J. Optim. 17, 1205–1223 (2006)
Bolte, J., Daniilidis, A., Lewis, A., Shiota, M.: Clarke subgradients of stratifiable functions. SIAM J. Optim. 18, 556–572 (2007)
Cichocki, A., Zdunek, R., Phan, A.H., Amari, S.: Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-Way Data Analysis and Blind Source Separation. Wiley, New York (2009)
Grippo, L., Sciandrone, M.: On the convergence of the block nonlinear Gauss-Seidel method under convex constraints. Oper. Res. Lett. 26, 127–136 (2000)
Heiler, M., Schnorr, C.: Learning sparse representations by nonnegative matrix factorization and sequential cone programming. J. Mach. Learn. Res. 7, 1385–1407 (2006)
Hoyer, P.O.: Nonnegative matrix factorization with sparseness constraints. J. Mach. Learn. Res. 5, 1457–1469 (2004)
Kurdyka, K.: On gradients of functions definable in o-minimal structures. Annales de l'institut Fourier 48, 769–783 (1998)
Lee, D.D., Seung, H.S.: Learning the parts of objects by nonnegative matrix factorization. Nature 401, 788–791 (1999)
Lin, C.J.: Projected gradient methods for nonnegative matrix factorization. Neural Comput. 19, 2756–2779 (2007)
Łojasiewicz, S.: Une propriété topologique des sous-ensembles analytiques réels. In: Les Équations aux Dérivées Partielles, pp. 87–89. Éditions du Centre National de la Recherche Scientifique, Paris (1963)
Luss, R., Teboulle, M.: Conditional gradient algorithms for rankone matrix approximations with a sparsity constraint. SIAM Rev. 55, 65–98 (2013)
Mordukhovich, B.: Variational Analysis and Generalized Differentiation. I. Basic Theory, Grundlehren der Mathematischen Wissenschaften, vol. 330. Springer, Berlin (2006)
Ortega, J.M., Rheinboldt, W.C.: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York (1970)
Palomar, D.P., Eldar, Y. (eds.): Convex Optimization in Signal Processing and Communications. Cambridge University Press, UK (2010)
Powell, M.J.D.: On search directions for minimization algorithms. Math. Program. 4, 193–201 (1973)
Rockafellar, R.T., Wets, R.: Variational Analysis. Grundlehren der Mathematischen Wissenschaften, vol. 317. Springer, Berlin (1998)
Sra, S., Nowozin, S., Wright, S.J. (eds.): Optimization for Machine Learning. The MIT Press, Cambridge (2011)
Tseng, P.: Convergence of a block coordinate descent method for nondifferentiable minimization. J. Optim. Theory Appl. 109, 475–494 (2001)
Zangwill, W.I.: Nonlinear Programming: A Unified Approach. Prentice Hall, Englewood Cliffs (1969)
Additional information
Jérôme Bolte: This research benefited from the support of the FMJH Program Gaspard Monge in optimization and operations research (and from the support to this program from EDF) and it was co-funded by the European Union under the 7th Framework Programme "FP7-PEOPLE-2010-ITN", grant agreement number 264735-SADCO. Shoham Sabach: Supported by a Tel Aviv University postdoctoral fellowship. Marc Teboulle: Partially supported by the Israel Science Foundation, ISF Grant 998-12.
Appendix: KL results
This appendix summarizes some important results on KL theory and gives some examples.
Definition 5
(Semialgebraic sets and functions)

(i)
A subset \(S\) of \(\mathbb R ^{d}\) is a real semialgebraic set if there exists a finite number of real polynomial functions \(g_{ij} , h_{ij} : \mathbb R ^{d} \rightarrow \mathbb R \) such that
$$\begin{aligned} S = \bigcup _{j = 1}^{p} \bigcap _{i = 1}^{q} \left\{ u \in \mathbb R ^{d} : \; g_{ij}\left( u\right) = 0 \ \text {and} \ h_{ij}\left( u\right) < 0 \right\} . \end{aligned}$$ 
(ii)
A function \(h : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \) is called semialgebraic if its graph
$$\begin{aligned} \left\{ \left( u , t\right) \in \mathbb R ^{d + 1} : \; h\left( u\right) = t \right\} \end{aligned}$$is a semialgebraic subset of \(\mathbb R ^{d + 1}\).
The following result is a nonsmooth version of the Łojasiewicz gradient inequality; it can be found in [16, 17].
Theorem 3
Let \(\sigma : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \) be a proper and lower semicontinuous function. If \(\sigma \) is semialgebraic then it satisfies the KL property at any point of \({\hbox {dom}}\,{\sigma }\).
The class of semialgebraic sets is stable under the following operations: finite unions, finite intersections, complementation and Cartesian products.
Example 2
(Examples of semialgebraic sets and functions) The following broad classes of sets and functions arising in optimization are semialgebraic.

Real polynomial functions.

Indicator functions of semialgebraic sets.

Finite sums and products of semialgebraic functions.

Compositions of semialgebraic functions.

Sup/Inf type functions, e.g., \(\sup \left\{ g\left( u , v\right) : \; v \in C \right\} \) is semialgebraic when \(g\) is a semialgebraic function and \(C\) a semialgebraic set.

In matrix theory, all the following are semialgebraic sets: the cone of PSD matrices, the Stiefel manifolds and the set of constant-rank matrices.

The function \(x \mapsto \hbox {dist}\left( x , S\right) ^{2}\) is semialgebraic whenever \(S\) is a nonempty semialgebraic subset of \(\mathbb R ^{d}\).
Remark 8
The above results can be proven directly or via the fundamental Tarski–Seidenberg principle: the image of a semialgebraic set \(A \subset \mathbb R ^{d + 1}\) under the projection \(\pi : \mathbb R ^{d + 1} \rightarrow \mathbb R ^{d}\) is semialgebraic.
All these results and properties can be found in [1–3].
Let us now give some examples of semialgebraic functions and other notions related to KL functions and their minimization through PALM.
Example 3
(\(\left\| \cdot \right\| _{0}\) is semialgebraic) The sparsity measure (or counting norm) of a vector \(x \in \mathbb R ^d\) is defined by
$$\begin{aligned} \left\| x \right\| _{0} := \# \left\{ i : \; x_{i} \ne 0 \right\} . \end{aligned}$$
For any given subset \(I \subset \left\{ 1 , \ldots , d \right\} \), we denote by \(\left| I \right| \) its cardinality and we define
$$\begin{aligned} J_{I} := \left\{ x \in \mathbb R ^{d} : \; x_{i} \ne 0 \ \text {if and only if} \ i \in I \right\} . \end{aligned}$$
The graph of \(\left\| \cdot \right\| _{0}\) is given by a finite union of product sets:
$$\begin{aligned} \mathrm{graph}\, \left\| \cdot \right\| _{0} = \bigcup _{I \subset \left\{ 1 , \ldots , d \right\} } J_{I} \times \left\{ \left| I \right| \right\} ; \end{aligned}$$
it is thus a piecewise linear set, and in particular a semialgebraic set. Therefore \(\left\| \cdot \right\| _{0}\) is semialgebraic. As a consequence, the merit functions appearing in the various sparse NMF formulations we studied in Sect. 4 are semialgebraic, hence KL.
Example 4
(\(\left\| \cdot \right\| _{p}\) and KL functions) Given \(p > 0\), the \(p\)-norm is defined through
$$\begin{aligned} \left\| x \right\| _{p} := \left( \sum _{i = 1}^{d} \left| x_{i} \right| ^{p} \right) ^{1/p}. \end{aligned}$$
Let us establish that \(\left\| \cdot \right\| _{p}\) is semialgebraic whenever \(p\) is rational, i.e., \(p = \frac{p_{1}}{p_{2}}\) where \(p_{1}\) and \(p_{2}\) are positive integers. From a general result concerning the composition of semialgebraic functions, we see that it suffices to establish that the function \(s > 0 \mapsto s^{\frac{p_{1}}{p_{2}}}\) is semialgebraic. Its graph in \(\mathbb R ^{2}\) can be written as
$$\begin{aligned} \left\{ \left( s , t\right) \in \mathbb R ^{2} : \; s > 0 , \ t > 0 , \ t^{p_{2}} = s^{p_{1}} \right\} . \end{aligned}$$
This last set is semialgebraic by definition.
When \(p\) is irrational, \(\left\| \cdot \right\| _{p}\) is not semialgebraic; however, for any semialgebraic and lower semicontinuous functions \(H,\,f\) and any nonnegative real numbers \(\alpha \) and \(\lambda \), the functions
are KL functions (see, e.g., [2] and references therein) with \(\varphi \) of the form \(\varphi \left( s\right) = c s^{1 - \theta }\), where \(c\) is positive and \(\theta \) belongs to \(\left( 0 , 1\right] \).
1.1 Convex functions and KL property
Our developments on the convergence of PALM and its rate of convergence seem to be new even in the convex case. It is thus very important to realize that most convex functions encountered in finite-dimensional applications satisfy the KL property. This may be due to the fact that they are semialgebraic or subanalytic, but it can also come from more involved reasons involving o-minimal structures (see [2] for further details) or more down-to-earth properties like various growth conditions (see below). The reader who is wondering what a non-KL convex function looks like can consult [15]. The convex counterexample provided in that work exhibits a wildly oscillatory collection of level sets, a phenomenon which seems highly unlikely to occur for functions modeling real-world problems.
An interesting and rather specific feature of convex functions is that their desingularizing function \(\varphi \) can be explicitly computed from rather common and simple properties. Here are two important examples taken from Attouch et al. [2].
Example 5
(Growth condition for convex functions) Consider a proper, convex and lower semicontinuous function \(\sigma : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \). Assume that \(\sigma \) satisfies the following growth condition: there exist a neighborhood \(U\) of \(\bar{x}\), \(\eta > 0\), \(c > 0\) and \(r \ge 1\) such that
$$\begin{aligned} \sigma \left( x\right) \ge \sigma \left( \bar{x}\right) + c \, \mathrm{dist}\left( x , \mathop {\mathrm{argmin}} \sigma \right) ^{r} \quad \text {for all} \ x \in U, \end{aligned}$$
where \(\bar{x} \in \mathrm{argmin }\, \sigma \ne \emptyset \). Then \(\sigma \) satisfies the KL property at the point \(\bar{x}\) for \(\varphi \left( s\right) = r\, c^{-\frac{1}{r}} \, s^{\frac{1}{r}}\) on the set \(U \cap \left[ \min \sigma < \sigma < \min \sigma + \eta \right] \) (see, for more details, [15, 16]).
Example 6
(Uniform convexity) Assume now that \(\sigma \) is uniformly convex, i.e., there exist \(p \ge 1\) and \(c > 0\) such that
$$\begin{aligned} \sigma \left( y\right) \ge \sigma \left( x\right) + \left\langle u , y - x \right\rangle + c \left\| y - x \right\| ^{p} \end{aligned}$$
for all \(x , y \in \mathbb R ^{d}\) and \(u \in \partial \sigma \left( x\right) \) (when \(p = 2\) the function is called strongly convex). Then \(\sigma \) satisfies the Kurdyka–Łojasiewicz property on \({\hbox {dom}}\,{\sigma }\) with \(\varphi \left( s\right) = p\, c^{-\frac{1}{p}} s^{\frac{1}{p}}\).
Bolte, J., Sabach, S., Teboulle, M.: Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program. 146, 459–494 (2014). https://doi.org/10.1007/s10107-013-0701-9
Keywords
 Alternating minimization
 Block coordinate descent
 Gauss-Seidel method
 Kurdyka–Łojasiewicz property
 Nonconvex-nonsmooth minimization
 Proximal forward-backward
 Sparse nonnegative matrix factorization