1 Introduction

Minimizing the sum of a finite collection of given functions has long been at the heart of mathematical optimization research. Indeed, such an abstract model is a convenient vehicle which includes most practical models arising in a wide range of applications, whereby each function can be used to describe a specific required property of the problem at hand, either as an objective, as a constraint, or both. Such a structure, while very general, still often allows one to beneficially exploit mathematical properties of the specific functions involved to devise simple and efficient algorithms. Needless to say, the literature in optimization research and its applications covering such a model is huge, and the present paper is not intended to review it. For some pioneering and early works that realized the potential of the sum optimization model, see for instance Auslender [4], and Bertsekas and Tsitsiklis [12], with references therein.

Recently there has been a revived interest in the design and analysis of algorithms for solving optimization problems involving sums of functions, in particular in signal/image processing and machine learning. The main trend is solving very large scale problems by exploiting special structures/properties of the problem data toward the design of very simple schemes (e.g., relying on matrix/vector multiplications), yet capable of producing reasonable approximate solutions efficiently. In order to achieve these goals, this recent research has placed a particular emphasis on the development and analysis of algorithms for convex models which either describe a particular application at hand or are used as relaxations for tackling an original nonconvex model. We refer the reader to the two very recent edited volumes [29] and [32] for a wealth of relevant and interesting works covering a broad spectrum of theory and applications which reflects this intense research activity.

In this work, we completely depart from the convex setting. Indeed, in many of the alluded applications, the original optimization model is often genuinely nonconvex and nonsmooth. This can be seen in a wide array of problems such as compressed sensing, matrix factorization, dictionary learning, sparse approximations of signals and images, and blind deconvolution, to mention just a few. We thus consider a broad class of nonconvex-nonsmooth problems of the form

$$\begin{aligned} (M) \quad \hbox {minimize}_{x , y} \varPsi \left( x , y\right) := f\left( x\right) + g\left( y\right) + H\left( x , y\right) \end{aligned}$$

where the functions \(f\) and \(g\) are extended valued (i.e., allowing the inclusion of constraints) and \(H\) is a smooth function (see more precise definitions in the next section). We stress that throughout this paper, no convexity whatsoever will be assumed in the objective and/or the constraints. Moreover, we note that the choice of two blocks of variables is made for simplicity of exposition. Indeed, all the results derived in this paper hold true for a finite number of block-variables, see Sect. 3.6.

This model is rich enough to cover many of the applications mentioned above, and was recently studied in the work of Attouch et al. [2], which also provides the motivation for the present work. The standard approach to solve Problem \((M)\) is via the so-called Gauss-Seidel iteration scheme, popularized in the modern era under the name alternating minimization. That is, starting with some given initial point \(\left( x^{0} , y^{0}\right) \), we generate a sequence \(\left\{ \left( x^{k} , y^{k}\right) \right\} _{k \in \mathbb N }\) via the scheme

$$\begin{aligned} x^{k + 1}&\in \mathrm{argmin }_{x} \varPsi \left( x , y^{k}\right) \\ y^{k + 1}&\in \mathrm{argmin }_{y} \varPsi \left( x^{k + 1} , y\right) . \end{aligned}$$
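To fix ideas, here is a minimal Python sketch of this conceptual scheme on a toy smooth instance of our own choosing (the quadratic coupling, the data, and the use of a generic numerical solver for each inner \(\mathrm{argmin}\) are all assumptions made for illustration only):

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance: Psi(x, y) = 0.5*||x - a||^2 + 0.5*||y - b||^2 + mu*<x, y>,
# jointly convex whenever |mu| <= 1, so every limit point minimizes Psi.
a, b, mu = np.array([1.0, 2.0]), np.array([-1.0, 0.5]), 0.5

def psi(x, y):
    return 0.5 * np.sum((x - a) ** 2) + 0.5 * np.sum((y - b) ** 2) + mu * (x @ y)

x, y = np.zeros(2), np.zeros(2)
for k in range(100):
    x = minimize(lambda u: psi(u, y), x).x  # x^{k+1} in argmin_x Psi(x, y^k)
    y = minimize(lambda v: psi(x, v), y).x  # y^{k+1} in argmin_y Psi(x^{k+1}, y)

print(psi(x, y))  # the values Psi(x^k, y^k) are nonincreasing in k
```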

Convergence results for the Gauss-Seidel method, also known as the coordinate descent method, can be found in several studies, see e.g., [4, 12, 28, 33]. One of the key assumptions necessary to prove convergence is that the minimum in each step is uniquely attained, see e.g., [34]. Otherwise, as shown in Powell [30], the method may cycle indefinitely without converging. In the convex setting, for a continuously differentiable function \(\varPsi \), assuming strict convexity in one argument while the other is fixed, every limit point of the sequence \(\left\{ \left( x^{k} , y^{k}\right) \right\} _{k \in \mathbb N }\) generated by this method minimizes \(\varPsi \), see e.g., [12]. Very recently, in [10], global rate of convergence results have been derived for a block coordinate gradient projection algorithm for convex and smooth constrained minimization problems.

Removing the strict convexity assumption can be achieved by coupling the method with a proximal term, that is, by considering the proximal regularization of the Gauss-Seidel scheme:

$$\begin{aligned} x^{k + 1}&\in \mathrm{argmin }_{x} \left\{ \varPsi (x , y^{k}) + \frac{c_k}{2}\left\| {x - x^{k}} \right\| ^{2} \right\} \end{aligned}$$
(1.1)
$$\begin{aligned} y^{k + 1}&\in \mathrm{argmin }_{y} \left\{ \varPsi (x^{k + 1} , y) + \frac{d_k}{2}\left\| {y - y^{k}} \right\| ^{2} \right\} \!, \end{aligned}$$
(1.2)

where \(c_{k}\) and \(d_{k}\) are positive real numbers. In fact, such an idea was already suggested by Auslender in [6]. It was further studied in [7] with a nonquadratic proximal term to handle linearly constrained convex problems, and further results can be found in [19]. In all these works, only convergence of the subsequences can be established. In the nonconvex and nonsmooth setting, which is the focus of this paper, the situation becomes much harder, see e.g., [33].

The present work is motivated by two very recent papers by Attouch et al. [2, 3], which appear to be the first works in the general nonconvex and nonsmooth setting: [2] establishes convergence of the sequences generated by the proximal Gauss-Seidel scheme (see (1.1), (1.2)), while in [3] a similar result was proven for the well-known proximal forward–backward (PFB) algorithm applied to the nonconvex and nonsmooth minimization of the sum of a nonsmooth function with a smooth one (i.e., Problem \((M)\) with no \(y\)). Their approach relies on assuming that the objective function \(\varPsi \) to be minimized satisfies the so-called Kurdyka–Łojasiewicz (KL) property [22, 25], which was developed for nonsmooth functions by Bolte et al. [16, 17] (see Sect. 2.4).

In both of these works, the suggested approach gains its strength from the fact that the class of functions satisfying the KL property is considerably large, and covers a wealth of nonconvex-nonsmooth functions arising in many fundamental applications, see more in the forthcoming Sect. 3 and in the Appendix.

Clearly, the scheme (1.1) and (1.2) always produces a nonincreasing sequence of function values, i.e., for all \(k \ge 0\) we have

$$\begin{aligned} \varPsi (x^{k + 1} , y^{k + 1}) \le \varPsi (x^{k}, y^{k}) \end{aligned}$$

and the sequence \(\{ \varPsi (x^{k} , y^{k}) \}_{k \in \mathbb N }\) is bounded from below by \(\inf \varPsi \). Thus, with \(\inf \varPsi > -\infty \), the sequence \(\{ \varPsi (x^{k} , y^{k}) \}_{k \in \mathbb N }\) converges to some real number, and as proven in [2], assuming that the objective function \(\varPsi \) satisfies the KL property, every bounded sequence generated by the proximal regularized Gauss-Seidel scheme (1.1) and (1.2) converges to a critical point of \(\varPsi \). These are nice properties of the scheme alluded to above. However, this scheme is conceptual, and not really a “true” algorithm, in the sense that it suffers from (at least) two main drawbacks. First, each step requires exact minimization of a nonconvex and nonsmooth problem. Secondly, it is a nested scheme, which raises two nontrivial issues: (i) the accumulation of computational errors at each step, and (ii) how and when to stop each inner step before passing to the next.

The above drawbacks motivate a very simple and naive approach, which can be traced back to Auslender [5] for smooth unconstrained minimization. Thus, for the more general Problem \((M)\), for each block of coordinates we perform one gradient step on the smooth part, while a proximal step is taken on the nonsmooth part. This idea contrasts with the entirely implicit step required by the proximal version of the Gauss-Seidel method (1.1) and (1.2); that is, here we consider an approximation of this scheme via the well-known and standard proximal linearization of each subproblem. This yields the Proximal Alternating Linearized Minimization (PALM) algorithm, whose exact description is given in Sect. 3.1. Thus, the root of our method can be viewed as nothing else but an alternating minimization approach for the so-called Proximal Forward–Backward (PFB) algorithm. Let us mention that the PFB algorithm has been extensively studied and successfully applied in many contexts in the convex setting, see e.g., the recent monograph of Bauschke and Combettes [8] for a wealth of fundamental results and references therein.

Now, we briefly outline the novelty of our approach and our contributions. First, the coupling of the Gauss-Seidel proximal scheme with PFB does not seem to have been analyzed in the literature within the general nonconvex and nonsmooth setting proposed here. It allows us to eliminate the difficulties evoked above for the scheme (1.1) and (1.2) and leads to a simple and tractable algorithm, PALM, with global convergence results for nonconvex and nonsmooth semi-algebraic problems.

Secondly, while a part of the convergence result we develop in this article falls within the scope of a general convergence mechanism introduced and described in [3], we present here a self-contained and thorough proof that avoids the use of these abstract results. The motivation stems from the fact that we target applications for which the KL property holds at each point of the underlying space. Functions having this property are called KL functions. A very wide class of KL functions is provided by tame functions; these include in particular nonsmooth semi-algebraic and real subanalytic functions (see, e.g., [2] and references therein). This property allows us, through a “uniformization” result inspired by Attouch and Bolte [1] (see Lemma 6), to considerably simplify the main arguments of the convergence analysis and avoid involved induction reasoning.

A third consequence of our approach is to provide a step-by-step analysis of our algorithm which singles out, at each stage of the convergence proof, the essential tools that are needed to get to the next stage. This allows one to understand the main ingredients at play and to evidence the exact role of the KL property in the analysis of algorithms in the nonconvex and nonsmooth setting; see more details in Sect. 3.2, where we outline a sort of “recipe” for proving global convergence results that could be of benefit in the analysis of many other optimization algorithms.

A fourth implication is that our block coordinate approach allows us to get rid of a restrictive assumption inherent to the proximal forward–backward algorithm which is often overlooked: the gradient of the smooth part \(H\) has to be globally Lipschitz continuous. This requirement often reduces the potential of applying PFB in concrete applications. On the contrary, our approach provides a flexibility that allows us to deal with more general problems (e.g., componentwise quadratic forms) or with some ill-conditioned quadratic problems. Indeed, the stepsizes in PALM may be adjusted componentwise in order to fit as much as possible the structure of the problem at hand, see Sect. 4 for an interesting application. Another by-product of this work is that it can also be applied to the convex version of Problem \((M)\), for which convergence results are quite limited. Indeed, even for convex problems our convergence results are new (see the Appendix). Finally, to illustrate our results, we present a simple algorithm proven to converge to a critical point for a broad class of nonconvex and nonsmooth nonnegative matrix factorization problems, which to the best of our knowledge appears to be the first globally convergent algorithm for this important class of problems.

Outline of the paper. The paper is organized as follows. In the next section we define the problem, make our setting precise, collect a few preliminary basic facts on nonsmooth analysis and on proximal maps for nonconvex functions, and introduce the KL property. In Sect. 3 we state the algorithm PALM, derive some elementary properties, and then develop a systematic approach to establish our main convergence results (see Sect. 3.2). In particular we clearly specify when and where the KL property plays a fundamental role in the overall convergence analysis. Section 4 illustrates our results on a broad class of nonconvex and nonsmooth matrix factorization problems. Finally, to make this paper self-contained, we include an appendix which summarizes some well-known and relevant results on the KL property, including some useful examples of KL functions. Throughout the paper, our notations are quite standard and can be found, for example, in [31].

2 The problem and some preliminaries

2.1 The problem and basic assumptions

We are interested in solving the nonconvex and nonsmooth minimization problem

$$\begin{aligned} (M) \quad \text{ minimize } \varPsi \left( x , y\right) := f\left( x\right) + g\left( y\right) + H\left( x , y\right) \text{ over } \text{ all } \left( x , y\right) \in \mathbb R ^{n} \times \mathbb R ^{m}. \end{aligned}$$

Following [2], we take the following as our blanket assumption.

Assumption 1

  1. (i)

    \(f : \mathbb R ^{n} \rightarrow \left( -\infty , +\infty \right] \) and \(g : \mathbb R ^{m} \rightarrow \left( -\infty , +\infty \right] \) are proper and lower semicontinuous functions.

  2. (ii)

    \(H : \mathbb R ^{n} \times \mathbb R ^{m} \rightarrow \mathbb R \) is a \(C^{1}\) function.

2.2 Subdifferentials of nonconvex and nonsmooth functions

Let us recall a few definitions concerning subdifferential calculus (see, for instance, [27, 31]). Recall that for \(\sigma : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \) a proper and lower semicontinuous function, the domain of \(\sigma \) is defined through

$$\begin{aligned} {\hbox {dom}}\,{\sigma } := \left\{ x \in \mathbb R ^{d} : \; \sigma \left( x\right) < +\infty \right\} . \end{aligned}$$

Definition 1

(Subdifferentials) Let \(\sigma : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \) be a proper and lower semicontinuous function.

  1. (i)

    For a given \(x \in {\hbox {dom}}\,{\sigma }\), the Fréchet subdifferential of \(\sigma \) at \(x\), written \(\widehat{\partial } \sigma (x)\), is the set of all vectors \(u \in \mathbb R ^{d}\) which satisfy

    $$\begin{aligned} \liminf _{y \rightarrow x,\; y \ne x} \frac{\sigma (y) - \sigma (x) - \left\langle {u , y - x} \right\rangle }{\left\| {y - x} \right\| } \ge 0. \end{aligned}$$

    When \(x \notin {\hbox {dom}}\,{\sigma }\), we set \(\widehat{\partial } \sigma \left( x\right) = \emptyset \).

  2. (ii)

    The limiting-subdifferential [27], or simply the subdifferential, of \(\sigma \) at \(x \in \mathbb R ^{d}\), written \(\partial \sigma \left( x\right) \), is defined through the following closure process

    $$\begin{aligned} \partial \sigma \left( x\right) := \left\{ u \in \mathbb R ^{d} : \exists \, x^{k} \rightarrow x, \; \sigma (x^{k}) \rightarrow \sigma (x) \; \text{ and } \; u^{k} \in \widehat{\partial } \sigma (x^{k}) \rightarrow u \; \text {as} \; k \rightarrow \infty \right\} . \end{aligned}$$

Remark 1

(i) We have \(\widehat{\partial } \sigma \left( x\right) \subset \partial \sigma (x)\) for each \(x \in \mathbb R ^{d}\). In the previous inclusion, the first set is closed and convex while the second one is closed (see [31, Theorem 8.6, page 302]).

  1. (ii)

    Let \(\{ (x^{k} , u^{k}) \}_{k \in \mathbb N }\) be a sequence in \(\hbox {graph}\,{\left( \partial \sigma \right) }\) that converges to \(\left( x , u\right) \) as \(k \rightarrow \infty \). By the very definition of \(\partial \sigma \left( x\right) \), if \(\sigma (x^{k})\) converges to \(\sigma (x)\) as \(k \rightarrow \infty \), then \(\left( x , u\right) \in \hbox {graph}\,{(\partial \sigma )}\).

  2. (iii)

    In this nonsmooth context, the well-known Fermat’s rule remains unchanged. It formulates as: “if \(x \in \mathbb R ^{d}\) is a local minimizer of \(\sigma \) then \( 0 \in \partial \sigma \left( x\right) \)” (see the simple example following this remark).

  3. (iv)

    Points whose subdifferential contains \(0\) are called (limiting-)critical points.

  4. (v)

    The set of critical points of \(\sigma \) is denoted by \(\hbox {crit}\,{\sigma }\).
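As a simple one-dimensional illustration of the notions above (ours, not part of the original remark), take \(\sigma \left( u\right) = \left| u\right| \) on \(\mathbb R \). Then \(\widehat{\partial } \sigma \left( 0\right) = \partial \sigma \left( 0\right) = \left[ -1 , 1\right] \), while \(\partial \sigma \left( u\right) = \left\{ \mathrm{sign}\left( u\right) \right\} \) for \(u \ne 0\); since \(0 \in \partial \sigma \left( 0\right) \), the origin belongs to \(\hbox {crit}\,{\sigma }\), in agreement with Fermat’s rule and with the fact that \(0\) is the global minimizer of \(\sigma \).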

Definition 2

(Sublevel sets) Given real numbers \(\alpha \) and \(\beta \), we set

$$\begin{aligned} \left[ \alpha \le \sigma \le \beta \right] := \left\{ x \in \mathbb R ^{d} : \alpha \le \sigma \left( x\right) \le \beta \right\} \!\,. \end{aligned}$$

We define similarly \(\left[ \alpha < \sigma < \beta \right] \). The level sets of \(\sigma \) are simply denoted by

$$\begin{aligned} \left[ \sigma = \alpha \right] := \left\{ x \in \mathbb R ^{d} : \sigma \left( x\right) = \alpha \right\} \!\,. \end{aligned}$$

Let us recall a useful result related to our structured Problem \((M)\), see e.g., [31].

Proposition 1

(Subdifferentiability property) Assume that the coupling function \(H\) in Problem \((M)\) is continuously differentiable. Then for all \(\left( x , y\right) \in \mathbb R ^{n} \times \mathbb R ^{m}\) we have

$$\begin{aligned} \partial \varPsi \left( x , y\right)&= \left( \nabla _{x} H\left( x , y\right) + \partial f\left( x\right) , \nabla _{y} H\left( x , y\right) + \partial g\left( y\right) \right) \nonumber \\&= \left( \partial _{x} \varPsi \left( x , y\right) , \partial _{y} \varPsi \left( x , y\right) \right) \!. \end{aligned}$$
(2.1)

Remark 2

Recall that for any set \(S\), both \(S + \emptyset \) and \(S \times \emptyset \) are empty sets, so that the above formula makes sense over the whole product space \(\mathbb R ^{n} \times \mathbb R ^{m}\).
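As a simple illustration of formula (2.1) (our own toy example), take \(n = m = 1,\,f\left( x\right) = \left| x\right| ,\,g\left( y\right) = \left| y\right| \) and \(H\left( x , y\right) = xy\). Then \(\nabla _{x} H\left( 0 , 0\right) = \nabla _{y} H\left( 0 , 0\right) = 0\), so that \(\partial \varPsi \left( 0 , 0\right) = \left[ -1 , 1\right] \times \left[ -1 , 1\right] \ni \left( 0 , 0\right) \), and the origin is a critical point of \(\varPsi \); it is in fact a local minimizer, since \(\varPsi \left( x , y\right) \ge \left| x\right| \left( 1 - \left| y\right| \right) + \left| y\right| \ge 0 = \varPsi \left( 0 , 0\right) \) whenever \(\left| y\right| \le 1\).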

2.3 Proximal map for nonconvex functions

We need to recall the fundamental Moreau proximal map for a nonconvex function (see [31, page 20]). It is at the heart of the PALM algorithm.

Let \(\sigma : \mathbb R ^{d} \rightarrow \left( -\infty , \infty \right] \) be a proper and lower semicontinuous function. Given \(x \in \mathbb R ^{d}\) and \(t > 0\), the proximal map associated to \(\sigma \) and its corresponding Moreau proximal envelope are defined respectively by:

$$\begin{aligned} \hbox {prox}_{t}^{\sigma }\left( x\right) := \mathrm{argmin }\, \left\{ \sigma \left( u\right) + \frac{t}{2}\left\| {u - x} \right\| ^{2} : \; u \in \mathbb R ^{d} \right\} \end{aligned}$$
(2.2)

and

$$\begin{aligned} m^{\sigma }\left( x , t\right) := \inf \left\{ \sigma \left( u\right) + \frac{1}{2t}\left\| {u - x} \right\| ^{2} : \; u \in \mathbb R ^{d} \right\} \!\,. \end{aligned}$$

Proposition 2

(Well-definedness of proximal maps) Let \(\sigma : \mathbb R ^{d} \rightarrow \left( -\infty , \infty \right] \) be a proper and lower semicontinuous function with \(\inf _\mathbb{R ^{d}} \sigma > -\infty \). Then, for every \(t \in \left( 0 , \infty \right) \), the set \(\hbox {prox}_{\frac{1}{t}}^{\sigma }\left( x\right) \) is nonempty and compact; in addition, \(m^{\sigma }\left( x , t\right) \) is finite and continuous in \(\left( x , t\right) \).

Note that here \(\hbox {prox}_{t}^{\sigma }\) is a set-valued map. When \(\sigma := \delta _{X}\), the indicator function of a nonempty and closed set \(X\), i.e., for the function \(\delta _{X} : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \) defined, for all \(x \in \mathbb R ^{d}\), by

$$\begin{aligned} \delta _{X}\left( x\right) = \left\{ \begin{array}{l@{\quad }l} 0, &{} \hbox { if } x \in X, \\ +\infty , &{} \hbox { otherwise}, \end{array}\right. \end{aligned}$$

the proximal map reduces to the projection operator onto \(X\), defined by

$$\begin{aligned} P_{X}\left( v\right) := \mathrm{argmin }\left\{ \left\| {u - v} \right\| : \; u \in X \right\} \!. \end{aligned}$$
(2.3)

The projection \(P_{X} : \mathbb R ^{d} \rightrightarrows \mathbb R ^{d}\) has nonempty values and defines in general a multi-valued map, as opposed to the convex case where orthogonal projections are guaranteed to be single-valued.
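To make these objects concrete, the following short numpy sketch (our own illustration) implements two standard closed-form cases: under the convention (2.2), the proximal map of \(\sigma = \lambda \left\| \cdot \right\| _{1}\) is soft-thresholding at level \(\lambda / t\), while the projection onto the nonconvex unit sphere is single-valued except at the origin, where every unit vector is a projection:

```python
import numpy as np

def prox_l1(x, t, lam):
    # prox_t^{sigma}(x) for sigma = lam*||.||_1 under convention (2.2):
    # argmin_u lam*||u||_1 + (t/2)*||u - x||^2, i.e., soft-threshold at lam/t.
    return np.sign(x) * np.maximum(np.abs(x) - lam / t, 0.0)

def project_sphere(v):
    # P_X(v) for the nonconvex set X = {u : ||u|| = 1}; multi-valued at v = 0,
    # where we simply return one of the infinitely many projections.
    nv = np.linalg.norm(v)
    if nv == 0.0:
        e = np.zeros_like(v); e[0] = 1.0
        return e
    return v / nv

print(prox_l1(np.array([2.0, -0.3, 0.1]), t=1.0, lam=0.5))  # [ 1.5 -0.  0. ]
print(project_sphere(np.array([3.0, 4.0])))                  # [0.6 0.8]
```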

2.4 The Kurdyka–Łojasiewicz property

The Kurdyka–Łojasiewicz property plays a central role in our analysis. Below, we recall the essential elements. We begin with the following extension of the Łojasiewicz gradient inequality [25], as introduced in [2] for nonsmooth functions. First, we introduce some notation. For any subset \(S \subset \mathbb R ^{d}\) and any point \(x \in \mathbb R ^{d}\), the distance from \(x\) to \(S\) is defined and denoted by

$$\begin{aligned} \hbox {dist}\left( x , S\right) := \inf \left\{ \left\| {y - x} \right\| : \; y \in S \right\} \!\,. \end{aligned}$$

When \(S = \emptyset \), we have that \(\hbox {dist}\left( x , S\right) = \infty \) for all \(x\).

Let \(\eta \in \left( 0 , +\infty \right] \). We denote by \(\Phi _{\eta }\) the class of all concave and continuous functions \(\varphi : \left[ 0 , \eta \right) \rightarrow \mathbb R _{+}\) which satisfy the following conditions

  1. (i)

    \(\varphi \left( 0\right) = 0\);

  2. (ii)

    \(\varphi \) is \(C^{1}\) on \(\left( 0 , \eta \right) \) and continuous at \(0\);

  3. (iii)

    for all \(s \in \left( 0 , \eta \right) \): \(\varphi ^{\prime }\left( s\right) > 0\).

Now we define the Kurdyka–Łojasiewicz (KL) property.

Definition 3

(Kurdyka–Łojasiewicz property) Let \(\sigma : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \) be proper and lower semicontinuous.

  1. (i)

    The function \(\sigma \) is said to have the Kurdyka–Łojasiewicz (KL) property at \(\overline{u} \in {\hbox {dom}}\,{\partial \sigma } := \left\{ u \in \mathbb R ^{d} : \partial \sigma \left( u\right) \ne \emptyset \right\} \) if there exist \(\eta \in \left( 0 , +\infty \right] \), a neighborhood \(U\) of \(\overline{u}\) and a function \(\varphi \in \Phi _{\eta }\), such that for all

    $$\begin{aligned} u \in U \cap \left[ \sigma \left( \overline{u}\right) < \sigma \left( u\right) < \sigma \left( \overline{u}\right) + \eta \right] \!\,, \end{aligned}$$

    the following inequality holds

    $$\begin{aligned} \varphi ^{\prime }\left( \sigma \left( u\right) - \sigma \left( \overline{u}\right) \right) \hbox {dist}\left( 0 , \partial \sigma \left( u\right) \right) \ge 1. \end{aligned}$$
    (2.4)
  2. (ii)

    If \(\sigma \) satisfies the KL property at each point of \({\hbox {dom}}\,{\partial \sigma }\) then \(\sigma \) is called a KL function.

It is easy to establish that the KL property holds in the neighborhood of noncritical points (see, e.g., [2]); thus the truly relevant aspect of this property is when \(\bar{u}\) is critical, i.e., when \(0 \in \partial \sigma \left( \bar{u}\right) \). In that case it warrants that \(\sigma \) is sharp up to a reparameterization of its values: “\(\sigma \) is amenable to sharpness”. Indeed, inequality (2.4) can be proved to imply

$$\begin{aligned} \hbox {dist}\left( 0 , \partial \left( \varphi \circ \left( \sigma \left( u\right) - \sigma \left( \overline{u}\right) \right) \right) \right) \ge 1 \end{aligned}$$

for all convenient \(u\) (simply use the “one-sided” chain-rule [31, Theorem 10.6]). This means that the subgradients of the function \(u \rightarrow \varphi \circ \left( \sigma \left( u\right) - \sigma \left( \bar{u}\right) \right) \) have a norm greater than \(1\), no matter how close the point \(u\) is to the critical point \(\bar{u}\) (provided that \(\sigma \left( u\right) > \sigma \left( \bar{u}\right) \)). This property is called sharpness, while the reparameterization function \(\varphi \) is called a desingularizing function of \(\sigma \) at \(\overline{u}\). As described in further detail below, this geometrical feature has dramatic consequences in the study of first-order descent methods (see also [3]).
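As a simple illustration (a standard example, not taken from the text above), consider \(\sigma \left( u\right) = \left| u\right| \) on \(\mathbb R \) with the critical point \(\overline{u} = 0\). For every \(u \ne 0\) we have \(\hbox {dist}\left( 0 , \partial \sigma \left( u\right) \right) = 1\), so the KL inequality (2.4) holds with the desingularizing function \(\varphi \left( s\right) = s\) (and any \(\eta \)). More generally, for many functions of interest one may take \(\varphi \left( s\right) = c s^{1 - \theta }\) with \(c > 0\) and \(\theta \in \left[ 0 , 1\right) \), which recovers the classical Łojasiewicz gradient inequality in the smooth analytic case.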

A remarkable aspect of KL functions is that they are ubiquitous in applications; for example, semi-algebraic, subanalytic and log-exp functions are KL functions (see [13] and references therein). These facts originate in the pioneering and fundamental works of Łojasiewicz [25] and Kurdyka [22], works which were recently extended to nonsmooth functions in [16, 17]. In the Appendix we recall a nonsmooth semi-algebraic version of the KL property, Theorem 3, which covers many problems arising in optimization and which plays a central role in the convergence analysis of our algorithm for the Nonnegative Matrix Factorization problem. For the reader’s convenience, other related facts and pertinent results are also summarized in the same appendix.

3 PALM algorithm and convergence analysis

3.1 The algorithm PALM

As outlined in the Introduction, PALM can be viewed as alternating the steps of the PFB scheme. It is well-known that the proximal forward–backward scheme for minimizing the sum of a smooth function \(h\) with a nonsmooth one \(\sigma \) can simply be viewed as the proximal regularization of \(h\) linearized at a given point \(x^{k}\), i.e.,

$$\begin{aligned} x^{k + 1} \in \mathrm{argmin }_{x \in \mathbb R ^{d}} \left\{ \left\langle {x - x^{k} , \nabla h\left( x^{k}\right) } \right\rangle + \frac{t}{2}\left\| {x - x^{k}} \right\| ^{2} + \sigma \left( x\right) \right\} , \quad \left( t > 0\right) , \end{aligned}$$
(3.1)

that is, using the proximal map notation defined in (2.2), we get

$$\begin{aligned} x^{k + 1} \in \hbox {prox}_{t}^{\sigma }\left( x^{k} - \frac{1}{t}\nabla h\left( x^{k}\right) \right) \!. \end{aligned}$$
(3.2)
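In code, one PFB step (3.2) amounts to a gradient step on \(h\) followed by a proximal step on \(\sigma \). A minimal Python sketch, where the quadratic \(h\), the \(\ell _{1}\) choice of \(\sigma \), and the stepsize are our own illustrative assumptions:

```python
import numpy as np

def prox_l1(x, t, lam):
    # prox of sigma = lam*||.||_1 under convention (2.2)
    return np.sign(x) * np.maximum(np.abs(x) - lam / t, 0.0)

def pfb_step(x, grad_h, t, prox_sigma):
    # One step (3.2): x^{k+1} in prox_t^sigma(x^k - (1/t) * grad h(x^k)).
    return prox_sigma(x - grad_h(x) / t, t)

# Example: h(x) = 0.5*||x - b||^2 (so L_h = 1), sigma = 0.1*||.||_1, t = 1.1 > L_h.
b = np.array([1.0, -0.2, 0.05])
x = np.zeros(3)
for k in range(100):
    x = pfb_step(x, lambda u: u - b, 1.1, lambda v, t: prox_l1(v, t, 0.1))
print(x)  # converges to the soft-thresholded vector [0.9, -0.1, 0.]
```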

Adopting this scheme for Problem \((M)\), we thus replace \(\varPsi \) in the iterations (1.1) and (1.2) (cf. the Introduction) by the approximations obtained through the proximal linearization of each subproblem, i.e., \(\varPsi \) is replaced by

$$\begin{aligned} \widehat{\varPsi }\left( x , y^{k}\right) = \left\langle {x - x^{k} , \nabla _{x} H\left( x^{k} , y^{k}\right) } \right\rangle + \frac{c_{k}}{2}\left\| {x - x^{k}} \right\| ^{2} + f\left( x\right) \!, \quad \left( c_{k} > 0\right) \!\,, \end{aligned}$$

and

$$\begin{aligned} \widehat{\widehat{\varPsi }}\left( x^{k + 1} , y\right) = \left\langle {y - y^{k} , \nabla _{y} H\left( x^{k + 1} , y^{k}\right) } \right\rangle + \frac{d_{k}}{2}\left\| {y - y^{k}} \right\| ^{2} + g\left( y\right) \!, \quad \left( d_{k} > 0\right) \!\,. \end{aligned}$$

Thus alternating minimization on the two blocks \(\left( x , y\right) \) yields the basis of the algorithm PALM we propose here.

PALM: Proximal Alternating Linearized Minimization

1. Initialization: start with any \(\left( x^{0} , y^{0}\right) \in \mathbb R ^{n} \times \mathbb R ^{m}\).

2. For each \(k = 0 , 1 , \ldots \) generate a sequence \(\left\{ \left( x^{k} , y^{k}\right) \right\} _{k \in \mathbb N }\) as follows:

2.1. Take \(\gamma _{1} > 1\), set \(c_{k} = \gamma _{1} L_{1}\left( y^{k}\right) \) and compute

$$\begin{aligned} x^{k + 1} \in \hbox {prox}_{c_{k}}^{f}\left( x^{k} - \frac{1}{c_{k}}\nabla _{x} H\left( x^{k} , y^{k}\right) \right) \!. \end{aligned}$$
(3.3)

2.2. Take \(\gamma _{2} > 1\), set \(d_{k} = \gamma _{2} L_{2}\left( x^{k + 1}\right) \) and compute

$$\begin{aligned} y^{k + 1} \in \hbox {prox}_{d_{k}}^{g}\left( y^{k} - \frac{1}{d_{k}}\nabla _{y} H\left( x^{k + 1} , y^{k}\right) \right) \!. \end{aligned}$$
(3.4)
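To make the scheme concrete, here is a minimal numpy sketch of PALM on a nonnegative matrix factorization instance of the type studied in Sect. 4: \(H\left( X , Y\right) = \frac{1}{2}\left\| {A - XY} \right\| _{F}^{2}\) with \(f\) and \(g\) the indicator functions of the nonnegative orthants, so that both proximal maps reduce to componentwise clipping; the data, dimensions, and the safeguard \(\varepsilon \) (cf. Remark 3(iii) below) are our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((20, 15))                 # data matrix to factorize
X, Y = rng.random((20, 5)), rng.random((5, 15))
gamma1 = gamma2 = 1.1                    # gamma_i > 1, as required by PALM
eps = 1e-8                               # lower bound on the moduli, cf. Remark 3(iii)

def H(X, Y):
    return 0.5 * np.linalg.norm(A - X @ Y, "fro") ** 2

for k in range(500):
    # Step (3.3): grad_x H(X, Y) = (XY - A) Y^T is Lipschitz in X
    # with modulus L1(Y) = ||Y Y^T||_2 (spectral norm).
    c = gamma1 * max(np.linalg.norm(Y @ Y.T, 2), eps)
    X = np.maximum(X - ((X @ Y - A) @ Y.T) / c, 0.0)   # prox of f = indicator of X >= 0
    # Step (3.4): grad_y H(X, Y) = X^T (XY - A), with L2(X) = ||X^T X||_2.
    d = gamma2 * max(np.linalg.norm(X.T @ X, 2), eps)
    Y = np.maximum(Y - (X.T @ (X @ Y - A)) / d, 0.0)   # prox of g = indicator of Y >= 0

print(H(X, Y))  # the values Psi(z^k) are nonincreasing, cf. Lemma 3(i) below
```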

PALM needs minimal assumptions to be analyzed.

Assumption 2

  1. (i)

    \(\inf _\mathbb{R ^{n} \times \mathbb R ^{m}} \varPsi > -\infty ,\,\inf _\mathbb{R ^{n}} f > -\infty \) and \(\inf _\mathbb{R ^{m}} g > -\infty \).

  2. (ii)

    For any fixed \(y\) the function \(x \rightarrow H\left( x , y\right) \) is \(C^{1,1}_{L_{1}(y)}\), namely the partial gradient \(\nabla _{x} H\left( x , y\right) \) is globally Lipschitz with modulus \(L_{1}\left( y\right) \), that is

    $$\begin{aligned} \left\| {\nabla _{x} H\left( x_{1} , y\right) - \nabla _{x} H\left( x_{2} , y\right) } \right\| \le L_{1}\left( y\right) \left\| {x_{1} - x_{2}} \right\| , \quad \forall \,x_{1} , x_{2} \in \mathbb R ^{n}. \end{aligned}$$

    Likewise, for any fixed \(x\) the function \(y \rightarrow H\left( x , y\right) \) is assumed to be \(C^{1,1}_{L_{2}(x)}\).

  3. (iii)

    For \(i = 1 , 2\) there exist \(\lambda _{i}^{-} , \lambda _{i}^{+} > 0\) such that

    $$\begin{aligned} \inf \{ L_{1}(y^{k}) : k \in \mathbb N \}&\ge \lambda _{1}^{-} \quad \text {and} \quad \inf \{ L_{2}(x^{k}) : k \in \mathbb N \} \ge \lambda _{2}^{-} \end{aligned}$$
    (3.5)
    $$\begin{aligned} \sup \{ L_{1}(y^{k}) : k \in \mathbb N \}&\le \lambda _{1}^{+} \quad \text {and} \quad \sup \{ L_{2}(x^{k}) : k \in \mathbb N \} \le \lambda _{2}^{+}. \end{aligned}$$
    (3.6)
  4. (iv)

    \(\nabla H\) is Lipschitz continuous on bounded subsets of \(\mathbb R ^{n} \times \mathbb R ^{m}\). In other words, for each bounded subset \(B_{1} \times B_{2}\) of \(\mathbb R ^{n} \times \mathbb R ^{m}\) there exists \(M > 0\) such that for all \(\left( x_{i} , y_{i}\right) \in B_{1} \times B_{2},\,i = 1 , 2\):

    $$\begin{aligned}&\left\| {\left( \nabla _{x} H\left( x_{1} , y_{1}\right) - \nabla _{x} H\left( x_{2} , y_{2}\right) , \nabla _{y} H\left( x_{1} , y_{1}\right) - \nabla _{y} H\left( x_{2} , y_{2}\right) \right) } \right\| \nonumber \\&\quad \le M \left\| {\left( x_{1} - x_{2} , y_{1} - y_{2}\right) } \right\| \!. \end{aligned}$$
    (3.7)

A few words on Assumption 2 are now in order.

Remark 3

  1. (i)

    Assumption 2(i) ensures that Problem \((M)\) is inf-bounded. It also warrants that the algorithm PALM is well defined through the proximal maps formulas (3.3) and (3.4) (see Proposition 2).

  2. (ii)

    The partial Lipschitz properties required in Assumption 2(ii) are at the heart of PALM which is designed to fully exploit the block-Lipschitz property of the problem at hand.

  3. (iii)

    The inequalities (3.5) in Assumption 2(iii) guarantee that the proximal steps in PALM always remain well-defined. As we describe now, these requirements are not demanding at all. Indeed, consider a function \(H\) whose gradient is Lipschitz continuous block-wise as in Assumption 2(ii). Take now two arbitrary positive constants \(\mu _{1}^{-}\) and \(\mu _{2}^{-}\), and replace the Lipschitz moduli \(L_{1}(y)\) and \(L_{2}(x)\) by \(L_{1}^{\prime }(y) = \max \left\{ L_{1}(y) , \mu _{1}^{-} \right\} \) and \(L_{2}^{\prime }(x) = \max \left\{ L_{2}(x) , \mu _{2}^{-} \right\} \), respectively. The functions \(L_{1}^{\prime }(y)\) and \(L_{2}^{\prime }(x)\) are still Lipschitz moduli of \(\nabla _{x} H\left( \cdot , y\right) \) and \(\nabla _{y} H\left( x , \cdot \right) \), respectively. Moreover

    $$\begin{aligned} \inf \left\{ L_{1}^{\prime }(y) : y \in \mathbb R ^{m} \right\} \ge \mu _{1}^{-} \quad \text {and} \quad \inf \left\{ L_{2}^{\prime }(x) : x \in \mathbb R ^{n} \right\} \ge \mu _{2}^{-}. \end{aligned}$$

    Thus the inequalities (3.5) are trivially fulfilled with these new Lipschitz moduli and with \(\lambda _{i}^{-} = \mu _{i}^{-}\) (\(i = 1 , 2\)).

  4. (iv)

    Assumption 2(iv) is satisfied whenever \(H\) is \(C^{2}\) as a direct consequence of the Mean Value Theorem. Similarly, the inequalities (3.6) in Assumption 2(iii), can be obtained by assuming that \(H\) is \(C^{2}\) and that the generated sequence \(\{ (x^{k} , y^{k}) \}_{k \in \mathbb N }\) is bounded.

Before deriving the convergence results for PALM, in the next subsection we outline our proof methodology.

3.2 An informal general proof recipe

Fix a positive integer \(N\). Let \(\varPsi : \mathbb R ^{N} \rightarrow \left( -\infty , +\infty \right] \) be a proper and lower semicontinuous function which is bounded from below and consider the problem

$$\begin{aligned} (P) \quad \inf \left\{ \varPsi \left( z\right) : \; z \in \mathbb R ^{N} \right\} \!\,. \end{aligned}$$

Suppose we are given a generic algorithm \(\mathcal A \) which generates a sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) via the following:

$$\begin{aligned} z^{0} \in \mathbb R ^{N}, z^{k+1} \in \mathcal A \left( z^{k}\right) , \quad k = 0 , 1 , \ldots . \end{aligned}$$

The objective is to prove that the whole sequence generated by the algorithm \(\mathcal{A }\) converges to a critical point of \(\varPsi \).

In the light of [1, 3], we outline a general methodology which describes the main steps to achieve this goal. In particular we put in evidence how and when the KL property is entering in action. Basically, the methodology consists of three main steps.

  1. (i)

    Sufficient decrease property: Find a positive constant \(\rho _{1}\) such that

    $$\begin{aligned} \rho _{1}\left\| {z^{k + 1} - z^{k}} \right\| ^{2} \le \varPsi (z^{k}) - \varPsi (z^{k + 1}), \quad \forall \,k = 0 , 1 , \ldots . \end{aligned}$$
  2. (ii)

    A subgradient lower bound for the iterates gap: Assume that the sequence generated by the algorithm \(\mathcal{A }\) is bounded. Find another positive constant \(\rho _{2}\) such that

    $$\begin{aligned} \left\| {w^{k+1}} \right\| \le \rho _{2}\left\| {z^{k+1} - z^{k}} \right\| \!, \quad w^{k+1} \in \partial \varPsi \left( z^{k+1}\right) \!, \quad \forall \,k = 0 , 1 , \ldots . \end{aligned}$$

These first two requirements above are quite standard and are shared by essentially all descent algorithms, see e.g., [2]. Note that when properties (i) and (ii) hold, then for any algorithm \(\mathcal A \) one can show that the set of accumulation points is a nonempty, compact and connected set (see Lemma 5(iii) for the case of PALM). One then needs to prove that it is a subset of the critical points of \(\varPsi \) on which \(\varPsi \) is constant.

Apart from the aspects concerning the structure of the limiting set (nonempty, compact and connected), these first two steps depend on the structure of the specific chosen algorithm \(\mathcal A \); therefore the constants \(\rho _{1}\) and \(\rho _{2}\) are fitted to the given algorithm. The third step, needed to complete our goal, namely to establish global convergence to a critical point of \(\varPsi \), does not depend at all on the structure of the specific chosen algorithm \(\mathcal A \).

Rather, it requires an additional assumption on the class of functions \(\varPsi \) to be minimized. It is here that the KL property enters in action: relying on the descent property of the algorithm, and on a uniformization of the KL property (see Lemma 6 below), the third and last step amounts to:

  1. (iii)

    Using the KL property: Assume that \(\varPsi \) is a KL function and show that the generated sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) is a Cauchy sequence.

This basic approach can in principle be applied to any algorithm and is now systematically developed for PALM.

3.3 Basic convergence properties

We first establish some basic properties of PALM under our Assumptions 1 and 2. We begin by recalling the well-known and important descent lemma for smooth functions, see e.g., [12, 28].

Lemma 1

(Descent lemma) Let \(h : \mathbb R ^{d} \rightarrow \mathbb R \) be a continuously differentiable function with gradient \(\nabla h\) assumed \(L_{h}\)-Lipschitz continuous. Then,

$$\begin{aligned} h\left( u\right) \le h\left( v\right) + \left\langle {u - v , \nabla h\left( v\right) } \right\rangle +\frac{L_{h}}{2}\left\| {u - v} \right\| ^{2}\!, \quad \forall \,\; u , v \in \mathbb R ^{d}. \end{aligned}$$
(3.8)

The main computational step of PALM involves a proximal map step of a proper and lower semicontinuous but nonconvex function. The next result shows that the well-known key inequality for the proximal-gradient step in the convex setting (see, e.g., [9]) can be easily extended to the nonconvex setting to warrant sufficient decrease of the objective function after a proximal map step.

Lemma 2

(Sufficient decrease property) Let \(h : \mathbb R ^{d} \rightarrow \mathbb R \) be a continuously differentiable function with gradient \(\nabla h\) assumed \(L_{h}\)-Lipschitz continuous and let \(\sigma : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \) be a proper and lower semicontinuous function with \(\inf _\mathbb{R ^{d}} \sigma > -\infty \). Fix any \(t > L_{h}\). Then, for any \(u \in {\hbox {dom}}\,{\sigma }\) and any \(u^{+} \in \mathbb R ^{d}\) defined by

$$\begin{aligned} u^{+} \in \hbox {prox}_{t}^{\sigma }\left( u - \frac{1}{t}\nabla h\left( u\right) \right) \end{aligned}$$
(3.9)

we have

$$\begin{aligned} h\left( u^{+}\right) + \sigma \left( u^{+}\right) \le h\left( u\right) + \sigma \left( u\right) - \frac{1}{2}\left( t - L_{h}\right) \left\| {u^{+} - u} \right\| ^{2}. \end{aligned}$$
(3.10)

Proof

First, it follows immediately from Proposition 2 that \(u^{+}\) is well-defined. By the definition of the proximal map given in (2.2) we get

$$\begin{aligned} u^{+} \in \mathrm{argmin }_{v \in \mathbb R ^{d}} \left\{ \left\langle {v - u , \nabla h\left( u\right) } \right\rangle + \frac{t}{2}\left\| {v - u} \right\| ^{2} + \sigma \left( v\right) \right\} \!\,, \end{aligned}$$

and hence in particular, taking \(v = u\), we obtain

$$\begin{aligned} \left\langle {u^{+} - u , \nabla h\left( u\right) } \right\rangle + \frac{t}{2}\left\| {u^{+} - u} \right\| ^{2} + \sigma \left( u^{+}\right) \le \sigma \left( u\right) \!. \end{aligned}$$
(3.11)

Invoking first the descent lemma (see Lemma 1) for \(h\), and using then inequality (3.11), we get

$$\begin{aligned} h\left( u^{+}\right) + \sigma \left( u^{+}\right)&\le h\left( u\right) + \left\langle {u^{+} - u , \nabla h\left( u\right) } \right\rangle + \frac{L_{h}}{2}\left\| {u^{+} - u} \right\| ^{2} + \sigma \left( u^{+}\right) \\&\le h\left( u\right) + \frac{L_{h}}{2}\left\| {u^{+} - u} \right\| ^{2} + \sigma \left( u\right) - \frac{t}{2}\left\| {u^{+} - u} \right\| ^{2} \\&= h\left( u\right) + \sigma \left( u\right) - \frac{1}{2}\left( t - L_{h}\right) \left\| {u^{+} - u} \right\| ^{2}\!\,. \end{aligned}$$

This proves that (3.10) holds. \(\square \)
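The decrease guaranteed by (3.10) is easy to test numerically. A small self-contained check under our own choice of data (a least squares \(h\), an \(\ell _{1}\) term \(\sigma \), and \(t = 2 L_{h}\)):

```python
import numpy as np

rng = np.random.default_rng(1)
Aq, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
Lh = np.linalg.norm(Aq.T @ Aq, 2)        # Lipschitz modulus of grad h
lam, t = 0.5, 2.0 * Lh                   # any t > L_h works

h = lambda u: 0.5 * np.sum((Aq @ u - b) ** 2)
grad_h = lambda u: Aq.T @ (Aq @ u - b)
sigma = lambda u: lam * np.sum(np.abs(u))
prox = lambda x: np.sign(x) * np.maximum(np.abs(x) - lam / t, 0.0)

u = rng.standard_normal(10)
u_plus = prox(u - grad_h(u) / t)         # the step (3.9)
lhs = h(u_plus) + sigma(u_plus)
rhs = h(u) + sigma(u) - 0.5 * (t - Lh) * np.sum((u_plus - u) ** 2)
print(lhs <= rhs + 1e-12)                # True: inequality (3.10) holds
```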

Remark 4

(i) The above result is valid for any \(t > 0\). The condition \(t > L_{h}\) ensures a sufficient decrease in the value of \(h(u^{+}) + \sigma (u^{+})\).

  1. (ii)

    If the function \(\sigma \) is taken as the indicator function \(\delta _{X}\) of a nonempty, closed and nonconvex subset \(X\), then the proximal map reduces to the projection \(P_{X}\), that is

    $$\begin{aligned} u^{+} \in P_{X}\left( u - \frac{1}{t}\nabla h\left( u\right) \right) \end{aligned}$$

    and we recover the sufficient decrease property of the Projected Gradient Method (PGM) in the nonconvex case.

  2. (iii)

    In the case when \(\sigma \) is a convex, proper and lower semicontinuous function, we can take \(t = L_{h}\) (and even \(t > \frac{L_h}{2}\)). Indeed, in that case, we can apply the global optimality condition characterizing \(u^{+}\) defined in (3.9) to get instead of (3.11) the stronger inequality

    $$\begin{aligned} \sigma \left( u^{+}\right) + \left\langle {u^{+} - u , \nabla h\left( u\right) } \right\rangle \le \sigma \left( u\right) - t\left\| {u^{+} - u} \right\| ^{2} \end{aligned}$$
    (3.12)

    which together with the descent lemma (see Lemma 1) yields

    $$\begin{aligned} h\left( u^{+}\right) + \sigma \left( u^{+}\right) \le h\left( u\right) + \sigma \left( u\right) - \left( t - \frac{L_{h}}{2}\right) \left\| {u^{+} - u} \right\| ^{2}\,. \end{aligned}$$
  3. (iv)

    In view of item (iii), when applying PALM with convex functions \(f\) and \(g\), the constants \(c_{k}\) and \(d_{k},\,k \in \mathbb N \), can simply be taken as \(L_{1}(y^{k})\) and \(L_{2}(x^{k + 1})\), respectively.

Equipped with this result, we can now establish some useful properties for PALM under our Assumptions 1 and 2. In the sequel for convenience we often use the notation

$$\begin{aligned} z^{k} := (x^{k} , y^{k}), \quad \forall \,k \ge 0. \end{aligned}$$

Lemma 3

(Convergence properties) Suppose that Assumptions 1 and 2 hold. Let \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) be a sequence generated by PALM. The following assertions hold.

  1. (i)

    The sequence \(\left\{ \varPsi \left( z^{k}\right) \right\} _{k \in \mathbb N }\) is nonincreasing and in particular

    $$\begin{aligned} \frac{\rho _{1}}{2}\left\| {z^{k + 1} - z^{k}} \right\| ^{2} \le \varPsi (z^{k}) - \varPsi (z^{k + 1}),\quad \forall \,k\ge 0, \end{aligned}$$
    (3.13)

    where

    $$\begin{aligned} \rho _{1} = \min \left\{ \left( \gamma _{1} - 1\right) \lambda _{1}^{-} ,\left( \gamma _{2} - 1\right) \lambda _{2}^{-} \right\} . \end{aligned}$$
  2. (ii)

    We have

    $$\begin{aligned} \sum _{k = 1}^{\infty } \left\| {x^{k + 1} - x^{k}} \right\| ^{2} + \left\| {y^{k + 1} - y^{k}} \right\| ^2 = \sum _{k = 1}^{\infty } \left\| {z^{k + 1} - z^{k}} \right\| ^{2} < \infty ,\quad \end{aligned}$$
    (3.14)

    and hence \(\lim _{k \rightarrow \infty } \left\| {z^{k + 1} - z^{k}} \right\| = 0\).

Proof

  1. (i)

    Fix \(k \ge 0\). Under our Assumption 2(ii), the functions \(x \rightarrow H\left( x , y\right) \) (\(y\) is fixed) and \(y \rightarrow H\left( x , y\right) \) (\(x\) is fixed) are differentiable and have Lipschitz gradients with moduli \(L_{1}\left( y\right) \) and \(L_{2}\left( x\right) \), respectively. Using the iterative steps (3.3) and (3.4), and applying Lemma 2 twice, first with \(h\left( \cdot \right) := H\left( \cdot , y^{k}\right) ,\,\sigma := f\) and \(t := c_{k} > L_{1}(y^{k})\), and secondly with \(h\left( \cdot \right) := H\left( x^{k + 1} , \cdot \right) ,\,\sigma := g\) and \(t := d_{k} > L_{2}(x^{k + 1})\), we obtain successively

    $$\begin{aligned} H\left( x^{k +1} , y^{k}\right) + f\left( x^{k + 1}\right)&\le H\left( x^{k} , y^{k}\right) + f\left( x^{k}\right) \\&\quad - \frac{1}{2}\left( c_{k} - L_{1}\left( y^{k}\right) \right) \left\| {x^{k + 1} - x^{k}} \right\| ^{2} \\&= H\left( x^{k} , y^{k}\right) + f\left( x^{k}\right) \\&\quad - \frac{1}{2}\left( \gamma _{1} - 1\right) L_{1}\left( y^{k}\right) \left\| {x^{k + 1} - x^{k}} \right\| ^{2}\!\,, \end{aligned}$$

    and

    $$\begin{aligned} H\left( x^{k + 1} , y^{k + 1}\right) + g\left( y^{k + 1}\right)&\le H\left( x^{k +1} , y^{k}\right) + g\left( y^{k}\right) \\&\quad - \frac{1}{2}\left( d_{k} - L_{2}\left( x^{k + 1}\right) \right) \left\| {y^{k + 1} - y^{k}} \right\| ^{2} \\&= H\left( x^{k +1} , y^{k}\right) + g\left( y^{k}\right) \\&\quad - \frac{1}{2}\left( \gamma _{2} - 1\right) L_{2}\left( x^{k + 1}\right) \left\| {y^{k + 1} - y^{k}} \right\| ^{2}\!\,. \end{aligned}$$

    Adding the above two inequalities, we thus obtain for all \(k \ge 0\),

    $$\begin{aligned} \varPsi \left( z^{k}\right) - \varPsi \left( z^{k + 1}\right)&= H\left( x^{k} , y^{k}\right) + f\left( x^{k}\right) + g\left( y^{k}\right) - H\left( x^{k + 1} , y^{k + 1}\right) \nonumber \\&\quad - f\left( x^{k + 1}\right) - g\left( y^{k + 1}\right) \nonumber \\&\ge \frac{1}{2}\left( \gamma _{1} - 1\right) L_{1}\left( y^{k}\right) \left\| {x^{k + 1} - x^{k}} \right\| ^{2} \nonumber \\&\quad + \frac{1}{2}\left( \gamma _{2} - 1\right) L_{2}\left( x^{k + 1}\right) \left\| {y^{k + 1} - y^{k}} \right\| ^{2}. \end{aligned}$$
    (3.15)

    From (3.15) it follows that the sequence \(\left\{ \varPsi \left( z^{k}\right) \right\} _{k \in \mathbb N }\) is nonincreasing, and since \(\varPsi \) is assumed to be bounded from below (see Assumption 2(i)), it converges to some real number \(\underline{\varPsi }\). Moreover, using the facts that \(L_{1}(y^{k}) \ge \lambda _{1}^{-} > 0\) and \(L_{2}(x^{k + 1}) \ge \lambda _{2}^{-} > 0\) (see Assumption 2(iii)), we get for all \(k \ge 0\):

    $$\begin{aligned}&\frac{1}{2}\left( \gamma _{1} - 1\right) L_{1}\left( y^{k}\right) \left\| {x^{k + 1} - x^{k}} \right\| ^{2}+ \frac{1}{2}\left( \gamma _{2} - 1\right) L_{2}\left( x^{k + 1}\right) \left\| {y^{k + 1} - y^{k}} \right\| ^{2} \nonumber \\&\quad \ge \frac{1}{2}\left( \gamma _{1} - 1\right) \lambda _{1}^{-}\left\| {x^{k + 1} - x^{k}} \right\| ^{2}+ \frac{1}{2}\left( \gamma _{2} - 1\right) \lambda _{2}^{-}\left\| {y^{k + 1} - y^{k}} \right\| ^{2} \nonumber \\&\quad \ge \frac{\rho _{1}}{2}\left\| {x^{k + 1} - x^{k}} \right\| ^{2} + \frac{\rho _{1}}{2}\left\| {y^{k + 1} - y^{k}} \right\| ^{2}. \end{aligned}$$
    (3.16)

    Combining (3.15) and (3.16) yields the following

    $$\begin{aligned} \frac{\rho _{1}}{2}\left\| {z^{k + 1} - z^{k}} \right\| ^{2} \le \varPsi (z^{k}) - \varPsi (z^{k + 1}), \end{aligned}$$
    (3.17)

    and assertion (i) is proved.

  2. (ii)

    Let \(N\) be a positive integer. Summing (3.17) from \(k = 0\) to \(N - 1\) we also get

    $$\begin{aligned} \sum _{k = 0}^{N - 1} \left\| {x^{k + 1} - x^{k}} \right\| ^{2} + \left\| {y^{k + 1} - y^{k}} \right\| ^{2}&= \sum _{k = 0}^{N - 1} \left\| {z^{k + 1} - z^{k}} \right\| ^{2}\\&\le \frac{2}{\rho _{1}}(\varPsi (z^{0}) - \varPsi (z^{N})) \\&\le \frac{2}{\rho _{1}}(\varPsi (z^{0}) - \underline{\varPsi }\,). \end{aligned}$$

    Taking the limit as \(N \rightarrow \infty \), we obtain the desired assertion (ii).\(\square \)

3.4 Approaching the set of critical points

In order to generate sequences approaching the set of critical points, we first prove the following result.

Lemma 4

(A subgradient lower bound for the iterates gap) Suppose that Assumptions 1 and 2 hold. Let \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) be a sequence generated by PALM which is assumed to be bounded. For each positive integer \(k\), define

$$\begin{aligned} A_{x}^{k} := c_{k - 1}\left( x^{k - 1} - x^{k}\right) + \nabla _{x} H\left( x^{k} , y^{k}\right) - \nabla _{x} H\left( x^{k - 1} , y^{k - 1}\right) \end{aligned}$$
(3.18)

and

$$\begin{aligned} A_{y}^{k} := d_{k - 1}\left( y^{k - 1} - y^{k}\right) + \nabla _{y} H\left( x^{k} , y^{k}\right) - \nabla _{y} H\left( x^{k} , y^{k - 1}\right) \!. \end{aligned}$$
(3.19)

Then \(\left( A_{x}^{k} , A_{y}^{k}\right) \in \partial \varPsi \left( x^{k} , y^{k}\right) \) and there exists \(M > 0\) such that

$$\begin{aligned} \left\| {\left( A_{x}^{k} , A_{y}^{k}\right) } \right\| \le \left\| {A_{x}^{k}} \right\| + \left\| {A_{y}^{k}} \right\| \le \left( 2M + 3\rho _{2}\right) \left\| {z^{k} - z^{k - 1}} \right\| \!,\quad \forall \,k\ge 1, \end{aligned}$$
(3.20)

where

$$\begin{aligned} \rho _{2} = \max \left\{ \gamma _{1}\lambda _{1}^{+} , \gamma _{2}\lambda _{2}^{+} \right\} . \end{aligned}$$

Proof

Let \(k\) be a positive integer. From the definition of the proximal map (2.2) and the iterative step (3.3) we have

$$\begin{aligned} x^{k} \in \mathrm{argmin }_{x \in \mathbb R ^{n}} \left\{ \left\langle {x - x^{k - 1} , \nabla _{x} H\left( x^{k - 1} , y^{k - 1}\right) } \right\rangle + \frac{c_{k - 1}}{2}\left\| {x - x^{k - 1}} \right\| ^{2} + f\left( x\right) \right\} \!\,. \end{aligned}$$

Writing down the optimality condition yields

$$\begin{aligned} \nabla _{x} H\left( x^{k - 1} , y^{k - 1}\right) + c_{k - 1}\left( x^{k} - x^{k - 1}\right) + u^{k} = 0 \end{aligned}$$

where \(u^{k} \in \partial f\left( x^{k}\right) \). Hence

$$\begin{aligned} \nabla _{x} H\left( x^{k - 1} , y^{k - 1}\right) + u^{k} = c_{k - 1}\left( x^{k - 1} - x^{k}\right) . \end{aligned}$$
(3.21)

Similarly from the iterative step (3.4) we have

$$\begin{aligned} y^{k} \in \mathrm{argmin }_{y \in \mathbb R ^{m}} \left\{ \left\langle {y - y^{k - 1} , \nabla _{y} H\left( x^{k} , y^{k - 1}\right) } \right\rangle + \frac{d_{k - 1}}{2}\left\| {y - y^{k - 1}} \right\| ^{2} + g\left( y\right) \right\} . \end{aligned}$$

Again, writing down the optimality condition yields

$$\begin{aligned} \nabla _{y} H\left( x^{k} , y^{k - 1}\right) + d_{k - 1}\left( y^{k} - y^{k - 1}\right) + v^{k} = 0 \end{aligned}$$

where \(v^{k} \in \partial g\left( y^{k}\right) \). Hence

$$\begin{aligned} \nabla _{y} H\left( x^{k} , y^{k - 1}\right) + v^{k} = d_{k - 1}\left( y^{k - 1} - y^{k}\right) . \end{aligned}$$
(3.22)

It is clear, from Proposition 1, that

$$\begin{aligned} \nabla _{x} H\left( x^{k} , y^{k}\right) + u^{k} \in \partial _{x} \varPsi \left( x^{k} , y^{k}\right) \quad \text {and} \quad \nabla _{y} H\left( x^{k} , y^{k}\right) + v^{k} \in \partial _{y} \varPsi \left( x^{k} , y^{k}\right) . \end{aligned}$$

From all these facts we obtain that \(\left( A_{x}^{k} , A_{y}^{k}\right) \in \partial \varPsi \left( x^{k} , y^{k}\right) \).

We now have to estimate the norms of \(A_{x}^{k}\) and \(A_{y}^{k}\). Since \(\nabla H\) is Lipschitz continuous on bounded subsets of \(\mathbb R ^{n} \times \mathbb R ^{m}\) (see Assumption 2(iv)) and since we assumed that \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) is bounded, there exists \(M > 0\) such that

$$\begin{aligned} \left\| {A_{x}^{k}} \right\|&\le c_{k - 1}\left\| {x^{k - 1} - x^{k}} \right\| + \left\| {\nabla _{x} H\left( x^{k} , y^{k}\right) - \nabla _{x} H\left( x^{k - 1} , y^{k - 1}\right) } \right\| \\&\le c_{k - 1}\left\| {x^{k} - x^{k - 1}} \right\| + M\left( \left\| {x^{k} - x^{k - 1}} \right\| + \left\| {y^{k} - y^{k - 1}} \right\| \right) \\&= \left( M + c_{k - 1}\right) \left\| {x^{k} - x^{k - 1}} \right\| + M\left\| {y^{k} - y^{k - 1}} \right\| . \end{aligned}$$

The modulus \(L_{1}\left( y^{k - 1}\right) \) being bounded from above by \(\lambda _{1}^{+}\) (see Assumption 2(iii)), we get that \(c_{k - 1} \le \gamma _{1}\lambda _{1}^{+}\) and hence

$$\begin{aligned} \left\| {A_{x}^{k}} \right\|&\le \left( M + \gamma _{1}\lambda _{1}^{+}\right) \left\| {x^{k} - x^{k - 1}} \right\| + M\left\| {y^{k} - y^{k - 1}} \right\| \nonumber \\&\le \left( 2M + \gamma _{1}\lambda _{1}^{+}\right) \left\| {z^{k} - z^{k - 1}} \right\| \nonumber \\&\le \left( 2M + \rho _{2}\right) \left\| {z^{k} - z^{k - 1}} \right\| . \end{aligned}$$
(3.23)

On the other hand, from the Lipschitz continuity of \(\nabla _{y}H\left( x , \cdot \right) \) (see Assumption 2(ii)), we have that

$$\begin{aligned} \left\| {A_{y}^{k}} \right\|&\le d_{k - 1}\left\| {y^{k} - y^{k - 1}} \right\| + \left\| {\nabla _{y} H\left( x^{k} , y^{k}\right) - \nabla _{y} H\left( x^{k} , y^{k - 1}\right) } \right\| \\&\le d_{k - 1}\left\| {y^{k} - y^{k - 1}} \right\| + d_{k - 1}\left\| {y^{k} - y^{k - 1}} \right\| \\&= 2d_{k - 1}\left\| {y^{k} - y^{k - 1}} \right\| . \end{aligned}$$

Since \(L_{2}\left( x^{k}\right) \) is bounded from above by \(\lambda _{2}^{+}\) (see Assumption 2(iii)), we get that \(d_{k - 1} \le \gamma _{2}\lambda _{2}^{+}\) and hence

$$\begin{aligned} \left\| {A_{y}^{k}} \right\| \le 2\gamma _{2}\lambda _{2}^{+}\left\| {y^{k} - y^{k - 1}} \right\| \le 2\gamma _{2}\lambda _{2}^{+}\left\| {z^{k} - z^{k - 1}} \right\| \le 2\rho _{2}\left\| {z^{k} - z^{k - 1}} \right\| . \end{aligned}$$
(3.24)

Summing up these estimations, we get the desired result in (3.20), that is,

$$\begin{aligned} \left\| {\left( A_{x}^{k} , A_{y}^{k}\right) } \right\| \le \left\| {A_{x}^{k}} \right\| + \left\| {A_{y}^{k}} \right\| \le \left( 2M + 3\rho _{2}\right) \left\| {z^{k} - z^{k - 1}} \right\| . \end{aligned}$$

This completes the proof. \(\square \)
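Numerically, the bound (3.20) can be observed on the nonnegative matrix factorization sketch of Sect. 3.1: along the run, the ratio \(\left( \left\| {A_{x}^{k}} \right\| + \left\| {A_{y}^{k}} \right\| \right) / \left\| {z^{k} - z^{k - 1}} \right\| \) stays bounded. A self-contained fragment (our own instrumentation, with the same illustrative data as before):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((20, 15))
X, Y = rng.random((20, 5)), rng.random((5, 15))
g1 = g2 = 1.1

gXH = lambda X, Y: (X @ Y - A) @ Y.T     # grad_x H for H = 0.5*||A - XY||_F^2
gYH = lambda X, Y: X.T @ (X @ Y - A)     # grad_y H

ratios = []
for k in range(200):
    Xp, Yp = X, Y                                    # previous iterate
    c = g1 * np.linalg.norm(Y @ Y.T, 2)
    X = np.maximum(X - gXH(X, Y) / c, 0.0)           # step (3.3)
    d = g2 * np.linalg.norm(X.T @ X, 2)
    Y = np.maximum(Y - gYH(X, Y) / d, 0.0)           # step (3.4)
    Ax = c * (Xp - X) + gXH(X, Y) - gXH(Xp, Yp)      # (3.18) at the new iterate
    Ay = d * (Yp - Y) + gYH(X, Y) - gYH(X, Yp)       # (3.19) at the new iterate
    gap = np.sqrt(np.linalg.norm(X - Xp, "fro") ** 2
                  + np.linalg.norm(Y - Yp, "fro") ** 2)
    if gap > 0:
        ratios.append((np.linalg.norm(Ax, "fro") + np.linalg.norm(Ay, "fro")) / gap)

print(max(ratios))  # remains bounded, consistent with (3.20)
```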

In the following result, we summarize several properties of the limit point set. Let \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) be a sequence generated by PALM from a starting point \(z^{0}\). The set of all limit points is denoted by \(\omega \left( z^{0}\right) \), i.e.,

$$\begin{aligned}&\omega (z^{0}) = \left\{ \overline{z} \in \mathbb R ^{n} \times \mathbb R ^{m} : \; \exists \text{ an } \text{ increasing } \text{ sequence } \text{ of } \text{ integers } \left\{ {k}_{{l}}\right\} _{{l} \in \mathbb N }\!, \right. \\&\quad \left. \text{ such } \text{ that } \; z^{k_{l}} \rightarrow \overline{z} \text{ as } l \rightarrow \infty \right\} \!. \end{aligned}$$

Lemma 5

(Properties of the limit point set \(\omega \left( z^{0}\right) \)) Suppose that Assumptions 1 and 2 hold. Let \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) be a sequence generated by PALM which is assumed to be bounded. The following assertions hold.

  1. (i)

    \(\emptyset \ne \omega \left( z^{0}\right) \subset \hbox {crit}\,{\varPsi }\)

  2. (ii)

    We have

    $$\begin{aligned} \lim _{k \rightarrow \infty } \hbox {dist}\left( z^{k} , \omega \left( z^{0}\right) \right) = 0. \end{aligned}$$
    (3.25)
  3. (iii)

    \(\omega \left( z^{0}\right) \) is a nonempty, compact and connected set.

  4. (iv)

    The objective function \(\varPsi \) is finite and constant on \(\omega \left( z^{0}\right) \).

Proof

  1. (i)

    Let \(z^{*} = \left( x^{*} , y^{*}\right) \) be a limit point of \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N } = \left\{ \left( x^{k} , y^{k}\right) \right\} _{k \in \mathbb N }\). This means that there is a subsequence \(\left\{ \left( x^{k_{q}} , y^{k_{q}}\right) \right\} _{q \in \mathbb N }\) such that \(\left( x^{k_{q}} , y^{k_{q}}\right) \rightarrow \left( x^{*} , y^{*}\right) \) as \(q \rightarrow \infty \). Since \(f\) and \(g\) are lower semicontinuous (see Assumption 1(i)), we obtain that

    $$\begin{aligned} \liminf _{{q} \rightarrow {\infty }} f\left( x^{k_{q}}\right) \ge f\left( x^{*}\right) \quad \text {and} \quad \liminf _{{q} \rightarrow {\infty }} g\left( y^{k_{q}}\right) \ge g\left( y^{*}\right) . \end{aligned}$$
    (3.26)

    From the iterative step (3.3), we have for all integer \(k\)

    $$\begin{aligned} x^{k + 1} \in \mathrm{argmin }_{x \in \mathbb R ^{n}} \left\{ \left\langle {x - x^{k} , \nabla _{x} H\left( x^{k} , y^{k}\right) } \right\rangle + \frac{c_{k}}{2}\left\| {x - x^{k}} \right\| ^{2} + f\left( x\right) \right\} \!. \end{aligned}$$

    Thus letting \(x = x^{*}\) in the above, we get

    $$\begin{aligned}&\left\langle {x^{k + 1} - x^{k} , \nabla _{x} H\left( x^{k} , y^{k}\right) } \right\rangle + \frac{c_{k}}{2}\left\| {x^{k + 1} - x^{k}} \right\| ^{2} + f\left( x^{k + 1}\right) \\&\quad \le \left\langle {x^{*} - x^{k} , \nabla _{x} H\left( x^{k} , y^{k}\right) } \right\rangle + \frac{c_{k}}{2}\left\| {x^{*} - x^{k}} \right\| ^{2}+ f\left( x^{*}\right) \!. \end{aligned}$$

    Choosing \(k = k_{q} - 1\) in the above inequality and letting \(q\) goes to \(\infty \), we obtain

    $$\begin{aligned} \limsup _{q \rightarrow \infty } f\left( x^{k_{q}}\right)&\le \limsup _{q \rightarrow \infty } \Bigg (\left\langle {x^{*} - x^{k_{q} - 1} , \nabla _{x} H\left( x^{k_{q} - 1} , y^{k_{q} - 1}\right) } \right\rangle \nonumber \\&\quad + \frac{c_{k_{q} - 1}}{2}\left\| {x^{*} - x^{k_{q} - 1}} \right\| ^{2}\Bigg ) + f\left( x^{*}\right) \!, \end{aligned}$$
    (3.27)

    where we have used the facts that both sequences \(\left\{ {x}^{{k}}\right\} _{{k} \in \mathbb N }\) and \(\left\{ {c}_{{k}}\right\} _{{k} \in \mathbb N }\) are bounded, that \(\nabla H\) is continuous, and that the distance between two successive iterates tends to zero (see Lemma 3(ii)). For that very reason we also have \(x^{k_{q} - 1} \rightarrow x^{*}\) as \(q \rightarrow \infty \), hence (3.27) reduces to \( \limsup _{q \rightarrow \infty } f\left( x^{k_{q}}\right) \le f\left( x^{*}\right) \). Thus, in view of (3.26), \(f\left( x^{k_{q}}\right) \) tends to \(f\left( x^{*}\right) \) as \(q \rightarrow \infty \). Arguing similarly with \(g\) and \(y^{k}\), we finally obtain

    $$\begin{aligned} \lim _{q\rightarrow \infty } \varPsi \left( x^{k_{q}} , y^{k_{q}}\right)&= \lim _{q\rightarrow \infty } \left\{ H\left( x^{k_{q}} , y^{k_{q}}\right) + f\left( x^{k_{q}}\right) + g\left( y^{k_{q}}\right) \right\} \\&= H\left( x^{*} , y^{*}\right) + f\left( x^{*}\right) + g\left( y^{*}\right) \\&= \varPsi \left( x^{*} , y^{*}\right) \!. \end{aligned}$$

    On the other hand we know from Lemmas 3(ii) and 4 that \(\left( A_{x}^{k} , A_{y}^{k}\right) \in \partial \varPsi \left( x^{k} , y^{k}\right) \) and \(\left( A_{x}^{k} , A_{y}^{k}\right) \rightarrow \left( 0 , 0\right) \) as \(k \rightarrow \infty \). The closedness property of \(\partial \varPsi \) (see Remark 1(ii)) implies thus that \(\left( 0 , 0\right) \in \partial \varPsi \left( x^{*} , y^{*}\right) \). This proves that \(\left( x^{*} , y^{*}\right) \) is a critical point of \(\varPsi \).

  2. (ii)

    This item follows as an elementary consequence of the definition of limit points.

  3. (iii)

    Set \(\omega = \omega \left( z^{0}\right) \). Observe that \(\omega \) can be viewed as an intersection of compact sets

    $$\begin{aligned} \omega = \bigcap _{q \in \mathbb N } \, \overline{\bigcup _{k \ge q} \left\{ z^{k} \right\} }\!, \end{aligned}$$

    so it is also compact. Towards a contradiction, we assume that \(\omega \) is not connected. Whence there exist two nonempty and closed disjoint subsets \(A\) and \(B\) of \(\omega \) such that \(\omega = A \cup B\). Consider the function \(\gamma : \mathbb R ^{n} \times \mathbb R ^{m} \rightarrow \mathbb R \) defined by

    $$\begin{aligned} \gamma \left( z\right) = \frac{{\hbox {dist}}\left( z , A\right) }{{\hbox {dist}}\left( z , A\right) + {\hbox {dist}}\left( z , B\right) } \end{aligned}$$

    for all \(z \in \mathbb R ^{n} \times \mathbb R ^{m}\). Due to the closedness properties of \(A\) and \(B\), the function \(\gamma \) is well defined; it is also continuous. Note that \(A = \gamma ^{-1}\left( \left\{ 0 \right\} \right) = \left[ \gamma = 0\right] \) and \(B = \gamma ^{-1}\left( \left\{ 1 \right\} \right) = \left[ \gamma = 1\right] \). Setting \(U = \left[ \gamma < 1/4\right] \) and \(V = \left[ \gamma > 3/4\right] \), we obtain, respectively, two open neighborhoods of the compact sets \(A\) and \(B\). There exists an integer \(k_{0}\) such that \(z^{k}\) belongs either to \(U\) or to \(V\) for all \(k \ge k_{0}\). Supposing the contrary, there would exist a subsequence \(\left\{ z^{k_{q}} \right\} _{q \in \mathbb N }\) evolving in the complement of the open set \(U \cup V\). This would imply the existence of a limit point \(z^{*}\) of \(z^{k}\) in \(\left( \mathbb R ^{n} \times \mathbb R ^{m}\right) {\setminus }\left( U \cup V\right) \), which is impossible. Put \(r_{k} = \gamma \left( z^{k}\right) \) for each integer \(k\). The sequence \(\left\{ {r}_{{k}}\right\} _{{k} \in \mathbb N }\) satisfies:

    1. \(r_{k} \notin \left[ 1/4 , 3/4\right] \) for all \(k \ge k_0\).

    2. There exist infinitely many \(k\) such that \(r_{k} < 1/4\).

    3. There exist infinitely many \(k\) such that \(r_{k} > 3/4\).

    4. The difference \(\left| r_{k + 1} - r_{k}\right| \) tends to \(0\) as \(k\) goes to infinity.

    The last point follows from the fact that \(\gamma \) is uniformly continuous on bounded sets together with the assumption that \(\left\| {z^{k + 1} - z^{k}} \right\| \rightarrow 0\). Clearly, no sequence can comply with the above four requirements, a contradiction. The set \(\omega \) is therefore connected.

  (iv)

    Denote by \(l\) the finite limit of \(\varPsi \left( z^{k}\right) \) as \(k\) goes to infinity. Take \(z^{*}\) in \(\omega \left( z^{0}\right) \). There exists a subsequence \(z^{k_{q}}\) converging to \(z^{*}\) as \(q\) goes to infinity. On one hand the sequence \(\left\{ \varPsi \left( z^{k_{q}}\right) \right\} _{q \in \mathbb N }\) converges to \(l\) and on the other hand (as we proved in assertion (i)) we have \(\varPsi \left( z^{*}\right) = l\). Hence the restriction of \(\varPsi \) to \(\omega \left( z^{0}\right) \) equals \(l\).

\(\square \)

Remark 5

Note that properties (ii) and (iii) in Lemma 5 are generic for any sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) satisfying \(\left\| {z^{k + 1} - z^{k}} \right\| \rightarrow 0\) as \(k\) goes to infinity.

Our objective is now to prove that the sequence generated by PALM converges to a critical point of Problem \((M)\). For that purpose we now assume that the objective of Problem \((M)\) is a KL function, which is the case, for example, when \(f , g\) and \(H\) are semi-algebraic (see the Appendix for more details).

3.5 Convergence of PALM to critical points of problem \((M)\)

Before proving our main theorem, we adjust the following result, which was established in [1, Lemma 1] for the Łojasiewicz property, to the more general KL property.

Lemma 6

(Uniformized KL property) Let \(\varOmega \) be a compact set and let \(\sigma : \mathbb R ^{d} \rightarrow \left( -\infty , \infty \right] \) be a proper and lower semicontinuous function. Assume that \(\sigma \) is constant on \(\varOmega \) and satisfies the KL property at each point of \(\varOmega \). Then, there exist \(\varepsilon > 0,\,\eta > 0\) and \(\varphi \in \Phi _{\eta }\) such that for all \(\overline{u}\) in \(\varOmega \) and all \(u\) in the following intersection:

$$\begin{aligned} \left\{ u \in \mathbb R ^{d} : \; \hbox {dist}\left( u , \varOmega \right) < \varepsilon \right\} \cap \left[ \sigma \left( \overline{u}\right) < \sigma \left( u\right) < \sigma \left( \overline{u}\right) + \eta \right] \end{aligned}$$
(3.28)

one has,

$$\begin{aligned} \varphi ^{\prime }\left( \sigma \left( u\right) - \sigma \left( \overline{u}\right) \right) \hbox {dist}\left( 0 , \partial \sigma \left( u\right) \right) \ge 1. \end{aligned}$$
(3.29)

Proof

Denote by \(\mu \) the value of \(\sigma \) over \(\varOmega \). The compact set \(\varOmega \) can be covered by a finite number of open balls \(B\left( u_{i} , \varepsilon _{i}\right) \) (with \(u_{i} \in \varOmega \) for \(i = 1 , \ldots , p\)) on which the KL property holds. For each \(i = 1 , \ldots , p\), we denote the corresponding desingularizing function by \(\varphi _{i} : \left[ 0 , \eta _{i}\right) \rightarrow \mathbb R _{+}\) with \(\eta _{i} > 0\). For each \(u \in B\left( u_{i} , \varepsilon _{i}\right) \cap \left[ \mu < \sigma < \mu + \eta _{i}\right] \), we thus have

$$\begin{aligned} \varphi _{i}^{\prime }\left( \sigma \left( u\right) - \sigma \left( u_{i}\right) \right) \hbox {dist}\left( 0 , \partial \sigma \left( u\right) \right) = \varphi _{i}^{\prime }\left( \sigma \left( u\right) - \mu \right) \hbox {dist}\left( 0 , \partial \sigma \left( u\right) \right) \ge 1. \end{aligned}$$
(3.30)

Choose \(\varepsilon > 0\) sufficiently small so that

$$\begin{aligned} U_{\varepsilon } := \left\{ x \in \mathbb R ^{d} : \hbox {dist}\left( x , \varOmega \right) \le \varepsilon \right\} \subset \bigcup _{i = 1}^{p} B\left( u_{i} , \varepsilon _{i}\right) \!. \end{aligned}$$
(3.31)

Set \(\eta = \min \left\{ \eta _{i} : \; i = 1 , \ldots , p \right\} >0\) and

$$\begin{aligned} \varphi \left( s\right) = \sum _{i = 1}^{p} \varphi _{i}\left( s\right) , \; \forall \,s \in \left[ 0 , \eta \right) \!. \end{aligned}$$

Observe now that, for all \(u\) in \(U_{\varepsilon } \bigcap \left[ \mu < \sigma < \mu + \eta \right] \), we obtain (cf. 3.30, 3.31)

$$\begin{aligned} \varphi ^{\prime }\left( \sigma \left( u\right) - \mu \right) \hbox {dist}\left( 0 , \partial \sigma \left( u\right) \right) = \sum _{i = 1}^{p} \varphi _{i}^{\prime }\left( \sigma \left( u\right) - \mu \right) \hbox {dist}\left( 0 , \partial \sigma \left( u\right) \right) \ge 1\!. \end{aligned}$$

Indeed, by (3.31), any such \(u\) belongs to some ball \(B\left( u_{j} , \varepsilon _{j}\right) \); the \(j\)-th summand is then at least one by (3.30), while the remaining summands are nonnegative. This completes the proof. \(\square \)

Now we will prove the main result.

Theorem 1

(A finite length property) Suppose that \(\varPsi \) is a KL function such that Assumptions 1 and 2 hold. Let \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) be a sequence generated by PALM which is assumed to be bounded. The following assertions hold.

  (i)

    The sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) has finite length, that is,

    $$\begin{aligned} \sum _{k = 1}^{\infty } \left\| {z^{k + 1} - z^{k}} \right\| < \infty . \end{aligned}$$
    (3.32)
  (ii)

    The sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) converges to a critical point \(z^{*} = \left( x^{*} , y^{*}\right) \) of \(\varPsi \).

Proof

Since \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) is bounded, there exists a subsequence \(\left\{ z^{k_{q}} \right\} _{q \in \mathbb N }\) such that \(z^{k_{q}} \rightarrow \overline{z}\) as \(q \rightarrow \infty \). Arguing as in the proof of Lemma 5(i), we get that

$$\begin{aligned} \lim _{k \rightarrow \infty } \varPsi \left( x^{k} , y^{k}\right) = \varPsi \left( \overline{x} , \overline{y}\right) . \end{aligned}$$
(3.33)

If there exists an integer \(\bar{k}\) for which \(\varPsi \left( z^{\bar{k}}\right) = \varPsi \left( \overline{z}\right) \), then the decrease property (3.13) would imply that \(z^{\bar{k} + 1} = z^{\bar{k}}\). A trivial induction then shows that the sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) is stationary and the announced results are obvious. We may therefore assume that \(\varPsi \left( \overline{z}\right) < \varPsi \left( z^{k}\right) \) for all \(k\); indeed, the sequence \(\left\{ \varPsi \left( z^{k}\right) \right\} _{k \in \mathbb N }\) is nonincreasing and, by (3.33), converges to \(\varPsi \left( \overline{z}\right) \). Again from (3.33), for any \(\eta > 0\) there exists a nonnegative integer \(k_{0}\) such that \(\varPsi \left( z^{k}\right) < \varPsi \left( \overline{z}\right) + \eta \) for all \(k > k_{0}\). From (3.25) we know that \(\lim _{k \rightarrow \infty } \hbox {dist}\left( z^{k} , \omega \left( z^{0}\right) \right) = 0\), so for any \(\varepsilon > 0\) there exists a positive integer \(k_{1}\) such that \(\hbox {dist}\left( z^{k} , \omega \left( z^{0}\right) \right) < \varepsilon \) for all \(k > k_{1}\). Summing up these facts, \(z^{k}\) belongs to the intersection in (3.28) for all \(k > l := \max \left\{ k_{0} , k_{1} \right\} \).

  (i)

    Since \(\omega \left( z^{0}\right) \) is nonempty and compact (see Lemma 5(ii)), and since \(\varPsi \) is finite and constant on \(\omega \left( z^{0}\right) \) (see Lemma 5(iv)), we can apply Lemma 6 with \(\varOmega = \omega \left( z^{0}\right) \). Therefore for any \(k > l\) we have

    $$\begin{aligned} \varphi ^{\prime }\left( \varPsi \left( z^{k}\right) - \varPsi \left( \overline{z}\right) \right) \hbox {dist}\left( 0 , \partial \varPsi \left( z^{k}\right) \right) \ge 1. \end{aligned}$$
    (3.34)

    This makes sense since we know that \(\varPsi \left( z^{k}\right) > \varPsi \left( \overline{z}\right) \) for any \(k > l\). From Lemma 4 we get that

    $$\begin{aligned} \varphi ^{\prime }\left( \varPsi \left( z^{k}\right) - \varPsi \left( \overline{z}\right) \right) \ge \frac{1}{2M + 3\rho _{2}}\left\| {z^{k} - z^{k - 1}} \right\| ^{-1}. \end{aligned}$$
    (3.35)

    On the other hand, from the concavity of \(\varphi \) we get that

    $$\begin{aligned}&\varphi \left( \varPsi \left( z^{k}\right) - \varPsi \left( \overline{z}\right) \right) - \varphi \left( \varPsi \left( z^{k + 1}\right) - \varPsi \left( \overline{z}\right) \right) \nonumber \\&\quad \ge \varphi ^{\prime }\left( \varPsi \left( z^{k}\right) - \varPsi \left( \overline{z}\right) \right) \left( \varPsi \left( z^{k}\right) \!-\! \varPsi \left( z^{k + 1}\right) \right) . \end{aligned}$$
    (3.36)

    For convenience, we define for all \(p , q \in \mathbb N \) and \(\overline{z}\) the following quantities

    $$\begin{aligned} \varDelta _{p , q} : = \varphi \left( \varPsi \left( z^{p}\right) - \varPsi \left( \overline{z}\right) \right) - \varphi \left( \varPsi \left( z^{q}\right) - \varPsi \left( \overline{z}\right) \right) \!\,, \end{aligned}$$

    and

    $$\begin{aligned} C : = \frac{2\left( 2M + 3\rho _{2}\right) }{\rho _{1}} \in \left( 0 , \infty \right) \!\,. \end{aligned}$$

    Combining Lemma 3(i) with (3.35) and (3.36) yields for any \(k > l\) that

    $$\begin{aligned} \varDelta _{k , k + 1} \ge \frac{\left\| {z^{k + 1} - z^{k}} \right\| ^{2}}{C\left\| {z^{k} - z^{k - 1}} \right\| }, \end{aligned}$$
    (3.37)

    and hence

    $$\begin{aligned} \left\| {z^{k + 1} - z^{k}} \right\| ^{2} \le C\varDelta _{k , k + 1}\left\| {z^{k} - z^{k - 1}} \right\| . \end{aligned}$$

    Using the fact that \(2\sqrt{\alpha \beta } \le \alpha + \beta \) for all \(\alpha , \beta \ge 0\), we infer

    $$\begin{aligned} 2\left\| {z^{k + 1} - z^{k}} \right\| \le \left\| {z^{k} - z^{k - 1}} \right\| + C\varDelta _{k , k + 1}. \end{aligned}$$
    (3.38)

    Let us now prove that for any \(k > l\) the following inequality holds

    $$\begin{aligned} \sum _{i = l + 1}^{k} \left\| {z^{i + 1} - z^{i}} \right\| \le \left\| {z^{l + 1} - z^{l}} \right\| + C\varDelta _{l + 1 , k + 1}. \end{aligned}$$

    Summing up (3.38) for \(i = l + 1 , \ldots , k\) yields

    $$\begin{aligned} 2\sum _{i = l + 1}^{k} \left\| {z^{i + 1} - z^{i}} \right\|&\le \sum _{i = l + 1}^{k} \left\| {z^{i} - z^{i - 1}} \right\| + C\sum _{i = l + 1}^{k} \varDelta _{i , i + 1} \\&\le \sum _{i = l + 1}^{k} \left\| {z^{i + 1} - z^{i}} \right\| + \left\| {z^{l + 1} - z^{l}} \right\| + C\sum _{i = l + 1}^{k} \varDelta _{i , i + 1} \\&= \sum _{i = l + 1}^{k} \left\| {z^{i + 1} - z^{i}} \right\| + \left\| {z^{l + 1} - z^{l}} \right\| + C\varDelta _{l + 1 , k + 1} \end{aligned}$$

    where the last equality follows from the fact that \(\varDelta _{p , q} + \varDelta _{q , r} = \varDelta _{p , r}\) for all \(p , q , r \in \mathbb N \). Since \(\varphi \ge 0\), we thus have for any \(k > l\) that

    $$\begin{aligned} \sum _{i = l + 1}^{k} \left\| {z^{i + 1} - z^{i}} \right\| \le \left\| {z^{l + 1} - z^{l}} \right\| + C\varphi \left( \varPsi \left( z^{l + 1}\right) - \varPsi \left( \overline{z}\right) \right) \!\,. \end{aligned}$$

    This easily shows that the sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) has finite length, that is,

    $$\begin{aligned} \sum _{k = 1}^{\infty } \left\| {z^{k + 1} - z^{k}} \right\| < \infty . \end{aligned}$$
    (3.39)
  (ii)

    We now show that (3.39) implies that the sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) is a Cauchy sequence and hence converges. Indeed, with \(q > p > l\) we have

    $$\begin{aligned} z^{q} - z^{p} = \sum _{k = p}^{q - 1} \left( z^{k + 1} - z^{k}\right) \end{aligned}$$

    hence

    $$\begin{aligned} \left\| {z^{q} - z^{p}} \right\| = \left\| {\sum _{k = p}^{q - 1} \left( z^{k + 1} - z^{k}\right) } \right\| \le \sum _{k = p}^{q - 1} \left\| {z^{k + 1} - z^{k}} \right\| . \end{aligned}$$

    Since (3.39) implies that the tail \(\sum _{k = p}^{\infty } \left\| {z^{k + 1} - z^{k}} \right\| \) converges to zero as \(p \rightarrow \infty \), it follows that \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) is indeed a Cauchy sequence and hence converges. The result now follows immediately from Lemma 5(i).

This completes the proof. \(\square \)

Remark 6

  (i)

    The boundedness assumption on the generated sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) holds in several scenarios such as when the functions \(f\) and \(g\) have bounded level sets. For a few more scenarios see [2].

  (ii)

    An important and fundamental case of application of Theorem 1 is when the data functions \(f , g\) and \(H\) are semi-algebraic. Observe also that the desingularizing function for semi-algebraic problems can be chosen to be of the form

    $$\begin{aligned} \varphi \left( s\right) = cs^{1 - \theta }, \end{aligned}$$
    (3.40)

    where \(c\) is a positive real number and \(\theta \) belongs to \(\left[ 0 , 1\right) \) (see [1] for more details). As explained below, this fact impacts the convergence rate of the method.

If the desingularizing function \(\varphi \) of \(\varPsi \) is of the form (3.40), then, as in [1], the following estimates hold.

  (i)

    If \(\theta = 0\) then the sequence \(\left\{ {z}^{{k}}\right\} _{{k} \in \mathbb N }\) converges in a finite number of steps.

  (ii)

    If \(\theta \in \left( 0 , 1/2\right] \) then there exist \(\omega > 0\) and \(\tau \in \left[ 0 , 1\right) \) such that \(\left\| {z^{k} - \overline{z}} \right\| \le \omega \, \tau ^{k}\).

  (iii)

    If \(\theta \in \left( 1/2 , 1\right) \) then there exists \(\omega > 0\) such that

    $$\begin{aligned} \left\| {z^{k} - \overline{z}} \right\| \le \omega \, k^{-\frac{1 - \theta }{2\theta - 1}}. \end{aligned}$$
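
For instance, in the semi-algebraic case, taking \(\theta = 3/4\) in (3.40), assertion (iii) gives the sublinear estimate

$$\begin{aligned} \left\| {z^{k} - \overline{z}} \right\| \le \omega \, k^{-\frac{1 - 3/4}{2 \cdot 3/4 - 1}} = \omega \, k^{-1/2}. \end{aligned}$$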

3.6 Extension of PALM for \(p\) blocks

The simple structure of PALM allows one to extend it to the more general setting involving \(p > 2\) blocks, for which Theorem 1 holds. This is briefly outlined below. Suppose that our optimization problem is now given as

$$\begin{aligned} \hbox {minimize} \left\{ \varPsi \left( x_{1} , \ldots , x_{p}\right) := \sum _{i = 1}^{p} f_{i}\left( x_{i}\right) + H\left( x_{1} , \ldots , x_{p}\right) : \; x_{i} \in \mathbb R ^{n_{i}} \right\} , \end{aligned}$$

where \(H: \mathbb R ^{N} \rightarrow \mathbb R \) with \(N = \sum _{i = 1}^{p} n_{i}\) is assumed to be \(C^{1}\) and each \(f_{i},\,i = 1 , \ldots , p\), is a proper and lower-semicontinuous function (this is exactly Assumption 1 for \(p > 2\)). We also assume that a modified version of Assumption 2 for \(p > 2\) blocks holds. In this case we denote by \(\nabla _{i} H\) the gradient of \(H\) with respect to variable \(x_{i},\,i = 1 , \ldots , p\). We denote by \(L_{i},\,i = 1 , \ldots , p\), the Lipschitz moduli of \(\nabla _{i} H\left( x_{1} , \ldots , \cdot , \ldots , x_{p}\right) \), that is, the gradient of \(H\) with respect to variable \(x_{i}\) when all \(x_{j},\,i \ne j\) (\(j = 1 , \ldots , p\)), are fixed. Similarly to Assumption 2(ii), it is clear that each \(L_{i},\,i = 1 , \ldots , p\), is a function of the \(p - 1\) variables \(x_{j},\,j \ne i\) (\(j = 1 , \ldots , p\)).

For simplicity of the presentation of PALM in the case of \(p > 2\) blocks, we use the following notation. Denote \(x^{k} = \left( x_{1}^{k} , x_{2}^{k} , \ldots , x_{p}^{k}\right) \) and

$$\begin{aligned} x^{k}(i) = \left( x_{1}^{k + 1} , x_{2}^{k + 1} , \ldots , x_{i - 1}^{k + 1} , x_{i}^{k + 1} , x_{i + 1}^{k} , \ldots , x_{p}^{k}\right) \!. \end{aligned}$$

Therefore \(x^{k}(0) = \left( x_{1}^{k} , x_{2}^{k} , \ldots , x_{p}^{k}\right) = x^{k}\) and \(x^{k}(p) = \left( x_{1}^{k + 1} , x_{2}^{k + 1} , \ldots , x_{p}^{k + 1}\right) = x^{k + 1}\).

In this case, the algorithm PALM minimizes \(\varPsi \) with respect to each of \(x_{1} , \ldots , x_{p}\), taken in cyclic order, while fixing the previously computed iterates. More precisely, starting with any \(\left( x_{1}^{0} , x_{2}^{0} , \ldots , x_{p}^{0}\right) \in \mathbb R ^{N}\), PALM generates a sequence \(\left\{ {x}^{{k}}\right\} _{{k} \in \mathbb N }\) via the following successive scheme:

$$\begin{aligned} x_{i}^{k + 1} \in \hbox {prox}_{c_{i}^{k}}^{f_{i}}\left( x_{i}^{k} - \frac{1}{c_{i}^{k}}\nabla _{i} H\left( x^{k}(i - 1)\right) \right) , \quad i = 1 , 2 , \ldots , p, \end{aligned}$$

where \(c_{i}^{k} = \gamma _{i}L_{i}\) with \(\gamma _{i} > 1\), and where \(L_{i}\) is evaluated at the current values of the blocks \(x_{j}\), \(j \ne i\). Theorem 1 can then be applied to the \(p\)-block version of PALM; a sketch of this scheme in code is given below.
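
To make the cyclic \(p\)-block updates concrete, the following Python sketch implements the scheme under the stated assumptions. The callables `prox`, `grad` and `lipschitz`, as well as all names and tolerances, are illustrative placeholders of ours, not part of the formal development.

```python
import numpy as np

def palm_p_blocks(x, prox, grad, lipschitz, gammas, max_iter=500, tol=1e-8):
    """Illustrative sketch of p-block PALM (not the authors' code).

    x            : list of p numpy arrays, the initial blocks x_1^0, ..., x_p^0
    prox[i]      : callable (v, c) -> a point in prox_c^{f_i}(v)
    grad[i]      : callable (blocks) -> nabla_i H evaluated at `blocks`
    lipschitz[i] : callable (blocks) -> the Lipschitz modulus L_i of nabla_i H
    gammas[i]    : a constant gamma_i > 1
    """
    p = len(x)
    for _ in range(max_iter):
        x_old = [xi.copy() for xi in x]
        for i in range(p):
            # When block i is updated, x holds x^k(i-1): blocks 0..i-1 are
            # already at iteration k+1, blocks i..p-1 are still at iteration k.
            c = gammas[i] * lipschitz[i](x)        # step size c_i^k = gamma_i * L_i
            x[i] = prox[i](x[i] - grad[i](x) / c, c)
        if sum(np.linalg.norm(x[i] - x_old[i]) for i in range(p)) < tol:
            break  # successive iterates are numerically stationary
    return x
```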

3.7 The proximal forward–backward scheme

When there is no \(y\) term, PALM reduces to the proximal forward–backward (PFB) scheme. In this case we have \(\varPsi \left( x\right) := f\left( x\right) + h\left( x\right) \) (where \(h\left( x\right) \equiv H\left( x , 0\right) \)), and the proximal forward–backward scheme for minimizing \(\varPsi \) can simply be viewed as the proximal regularization of \(h\) linearized at a given point \(x^{k}\), i.e.,

$$\begin{aligned} x^{k + 1} \in \mathrm{argmin }_{x \in \mathbb R ^{n}} \left\{ \left\langle {x - x^{k} , \nabla h\left( x^{k}\right) } \right\rangle + \frac{t_{k}}{2}\left\| {x - x^{k}} \right\| ^{2} + f\left( x\right) \right\} \!\,. \end{aligned}$$
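
The minimization above is precisely a proximal step on \(f\) at a gradient point, namely \(x^{k + 1} \in \hbox {prox}_{t_{k}}^{f}\left( x^{k} - \frac{1}{t_{k}}\nabla h\left( x^{k}\right) \right) \), so the scheme fits in a few lines of Python. This is a minimal sketch assuming \(\hbox {prox}_{t}^{f}\) is computable; the fixed choice of step size and all names are ours.

```python
def proximal_forward_backward(x, grad_h, prox_f, L_h, n_iter=1000):
    """Sketch of PFB: x^{k+1} in prox_t^f(x^k - grad_h(x^k) / t).

    grad_h : callable x -> gradient of the smooth part h at x
    prox_f : callable (v, t) -> a point in prox_t^f(v)
    L_h    : Lipschitz constant of grad_h; any fixed t > L_h is admissible
    """
    t = 1.1 * L_h  # satisfies t_k > L_h, as required in Proposition 3 below
    for _ in range(n_iter):
        x = prox_f(x - grad_h(x) / t, t)  # forward gradient step, backward prox step
    return x
```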

A convergence result for the PFB scheme was first proved in [3] via the abstract framework developed in that paper. Our approach allows for a simpler and more direct proof. The sufficient decrease property of the sequence \(\left\{ \varPsi \left( x^{k}\right) \right\} _{k \in \mathbb N }\) follows directly from Lemma 2 with \(\sigma := f\) and \(t := t_{k} > L_{h}\). The second property, namely “a subgradient lower bound for the iterates gap”, follows from the Lipschitz continuity of \(\nabla h\). The global convergence result then follows immediately from Theorem 1. For the sake of completeness we record it in the following proposition.

Proposition 3

(A convergence result of PFB) Let \(h : \mathbb R ^{d} \rightarrow \mathbb R \) be a continuously differentiable function with gradient \(\nabla h\) assumed \(L_{h}\)-Lipschitz continuous and let \(f : \mathbb R ^{d} \rightarrow \left( -\infty , +\infty \right] \) be a proper and lower semicontinuous function with \(\inf _\mathbb{R ^{d}} f > -\infty \). Assume that \( f+h\) is a KL function. Let \(\left\{ {x}^{{k}}\right\} _{{k} \in \mathbb N }\) be a sequence generated by PFB which is assumed to be bounded and let \(t_{k} > L_{h}\). The following assertions hold.

  (i)

    The sequence \(\left\{ {x}^{{k}}\right\} _{{k} \in \mathbb N }\) has finite length, that is,

    $$\begin{aligned} \sum _{k = 1}^{\infty } \left\| {x^{k + 1} - x^{k}} \right\| < \infty . \end{aligned}$$
  (ii)

    The sequence \(\left\{ {x}^{{k}}\right\} _{{k} \in \mathbb N }\) converges to a critical point \(x^{*}\) of \(f+h\).

It is well known that PFB reduces to the projected gradient method (PGM) when \(f = \delta _{X}\) (where \(X\) is a nonempty and closed, possibly nonconvex, subset of \(\mathbb R ^{d}\)), i.e., PGM generates a sequence \(\left\{ {x}^{{k}}\right\} _{{k} \in \mathbb N }\) via

$$\begin{aligned} x^{k + 1} \in P_{X}\left( x^{k} - \frac{1}{t_{k}}\nabla h\left( x^{k}\right) \right) . \end{aligned}$$

Thus, when \(h + \delta _{X}\) is a KL function and \(h \in C_{L_{h}}^{1,1}\), global convergence of the sequence \(\left\{ {x}^{{k}}\right\} _{{k} \in \mathbb N }\) generated by PGM follows from Proposition 3, recovering the result established in [3].
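
In code, PGM is the PFB loop with the proximal step replaced by (a selection of) the projection onto \(X\); a sketch under the same conventions as the PFB sketch above, with all names ours:

```python
def projected_gradient(x, grad_h, project_X, L_h, n_iter=1000):
    """PGM as the special case f = delta_X of the PFB scheme.

    project_X : callable selecting one element of the possibly multi-valued
                projection P_X onto the closed (possibly nonconvex) set X
    """
    t = 1.1 * L_h  # any fixed t_k > L_h
    for _ in range(n_iter):
        x = project_X(x - grad_h(x) / t)
    return x
```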

4 An application to matrix factorization problems

Matrix factorization problems play a fundamental role in data analysis and can be found in many disparate applications. A very large body of literature covers this active research area; for a recent account we refer for example to the book [18] and references therein.

In this section we show how PALM can be applied to a broad class of such problems to produce a globally convergent algorithm.

4.1 A broad class of matrix factorization problems

Let \(p, q, m, n\) and \(r\) be given integers. Define the following sets in the space of real matrices

$$\begin{aligned} \mathcal K _{p , q}&= \left\{ M \in \mathbb R ^{p \times q} : \; M \ge 0 \right\} \!, \\ \mathcal F&= \left\{ X \in \mathbb R ^{m \times r} : \; R_{1}\left( X\right) \le \alpha \right\} \!, \\ \mathcal G&= \left\{ Y \in \mathbb R ^{r \times n} : \; R_{2}\left( Y\right) \le \beta \right\} \!\,, \end{aligned}$$

where \(R_{1}\) and \(R_{2}\) are lower semicontinuous functions and \(\alpha , \beta \in \mathbb R _{+}\) are given parameters.

Roughly speaking, the matrix factorization (or approximation) problem consists in finding a product decomposition of a given matrix satisfying certain properties.

4.1.1 The problem

Given a matrix \(A \in \mathbb R ^{m \times n}\) and an integer \(r\) much smaller than \(\min \left\{ m , n \right\} \), find two matrices \(X \in \mathbb R ^{m \times r}\) and \(Y \in \mathbb R ^{r \times n}\) such that

$$\begin{aligned} \left\{ \begin{array}{ll} A \approx XY, \\ X \in \mathcal K _{m , r} \cap \mathcal F , \\ Y \in \mathcal K _{r , n} \cap \mathcal G . \end{array}\right. \end{aligned}$$

The functions \(R_{1}\) and \(R_{2}\) are often used to describe some additional features of the matrices \(X\) and \(Y\), respectively, arising in a specific application at hand (see more below).

To solve the problem, we adopt the optimization approach, that is, we consider the nonconvex and nonsmooth minimization problem

$$\begin{aligned} (MF) \quad \min \left\{ d\left( A , XY\right) : \; X \in \mathcal K _{m , r} \cap \mathcal F , Y \in \mathcal K _{r , n} \cap \mathcal G \right\} \!\,, \end{aligned}$$

where \(d : \mathbb R ^{m \times n}\times \mathbb R ^{m \times n} \rightarrow \mathbb R _{+}\) is a proximity function measuring the quality of the approximation, satisfying \(d\left( U, V\right) = 0\) if and only if \(U = V\). Note that \(d\left( \cdot , \cdot \right) \) is not necessarily symmetric and need not be a metric.

Another way to formulate \((MF)\) is to consider its penalized version where the “hard” constraints are the candidates to be penalized, i.e., we consider the following penalized problem

$$\begin{aligned} (P\!-\!MF) \quad \min \left\{ \mu _{1}R_{1}\left( X\right) + \mu _{2}R_{2}\left( Y\right) + d\left( A , XY\right) : X \in \mathcal K _{m , r} , Y \in \mathcal K _{r , n} \right\} \!\,, \end{aligned}$$

where \(\mu _{1} > 0\) and \(\mu _{2} > 0\) are penalty parameters. Note, however, that the penalty approach requires tuning the unknown penalty parameters, which might be a difficult issue.

Both formulations can be written in the form of our general Problem \((M)\) with the obvious identifications for the corresponding \(H , f\) and \(g\), e.g.,

$$\begin{aligned} \min \left\{ \varPsi \left( X , Y\right) : = f\left( X\right) + g\left( Y\right) + H\left( X , Y\right) : \; X \in \mathbb R ^{m \times r} , Y \in \mathbb R ^{r \times n} \right\} \!, \end{aligned}$$

where

$$\begin{aligned} \hbox {MF-Constrained} \; \varPsi _{c}\left( X , Y\right)&:= \delta _\mathcal{K _{m , r} \cap \mathcal F }\left( X\right) + \delta _\mathcal{K _{r , n} \cap \mathcal G }\left( Y\right) + d\left( A , XY\right) \!, \\ \hbox {MF-Penalized} \; \varPsi _{p}\left( X , Y\right)&:= \mu _{1}R_{1}\left( X\right) + \delta _\mathcal{K _{m , r}}\left( X\right) + \mu _{2}R_{2}\left( Y\right) \\&+\delta _\mathcal{K _{r , n}}\left( Y\right) + d\left( A , XY\right) \!. \end{aligned}$$

Thus, assuming that Assumptions 1 and 2 hold for the problem data, quantified here via the triple \([d, \mathcal{F}, \mathcal{G}]\), and that the functions \(d, R_{1}\) and \(R_{2}\) are KL functions, we can apply PALM and Theorem 1 to produce a scheme that is globally convergent to a critical point of \(\varPsi \), thereby solving Problem \((MF)\). This does not seem to have been addressed in the literature within such a general formalism. It covers a multitude of possible formulations, from which many algorithms can be conceived by appropriate choices of the triple \([d, \mathcal{F}, \mathcal{G}]\) within a given application at hand. This is illustrated next on an important class of problems.

4.2 An algorithm for the sparse nonnegative matrix factorization

To be specific, in the sequel we focus on the classical case where the proximity measure is defined via the Frobenius norm

$$\begin{aligned} d\left( A , XY\right) = \frac{1}{2}\left\| {A - XY} \right\| _{F}^{2} \end{aligned}$$

and for any matrix \(M\), the Frobenius norm is defined by

$$\begin{aligned} \left\| {M} \right\| _{F}^{2} = \sum _{i , j} m_{ij}^{2} = \text{ Tr }\left( MM^{T}\right) = \text{ Tr }\left( M^{T}M\right) = \left\langle {M , M} \right\rangle \!\,, \end{aligned}$$

where \(\text{ Tr }\) is the Trace operator. Many other proximity measures can also be used, such as entropy-like distances, see e.g., [18] and references therein.

Example 1

(Nonnegative matrix factorization) With \(\mathcal F = \mathbb R ^{m \times r}\) and \(\mathcal G = \mathbb R ^{r \times n}\), Problem \((MF)\) reduces to the so-called Nonnegative Matrix Factorization (NMF) problem

$$\begin{aligned} \min \left\{ \frac{1}{2}\left\| {A - XY} \right\| _{F}^{2} : X \ge 0 , Y \ge 0 \right\} \!\,. \end{aligned}$$

The nonnegative matrix factorization [23] has been at the heart of intense research and applied in a variety of areas (see, e.g., [14] for applications in signal processing). More recently, the introduction of “sparsity” has become of particular importance, and variants of NMF involving sparsity have also been considered in the literature (see, e.g., [20, 21]). Many, if not most, algorithms for solving the NMF problem are based on Gauss-Seidel-like methods, see e.g., [11, 18, 24], and come with quite limited convergence results. Moreover, extended versions of NMF with sparsity were treated via relaxations and corresponding convex re-formulations, solved by sophisticated and computationally demanding conic programming schemes, see e.g., [20, 21].

To illustrate the benefit of our approach, we now show how PALM can be applied to solve directly, “as is”, the more difficult constrained, nonconvex and nonsmooth sparse nonnegative matrix factorization problem, producing a simple convergent scheme.

First we note that the objective function \(d\left( A , XY\right) := H\left( X , Y\right) = (1/2)\left\| {A - XY} \right\| _{F}^{2}\) is a real polynomial function, hence semi-algebraic; moreover, both functions \(X \rightarrow H\left( X , Y\right) \) (for fixed \(Y\)) and \(Y \rightarrow H\left( X , Y\right) \) (for fixed \(X\)) are \(C^{1,1}\). Indeed, we have

$$\begin{aligned} X\rightarrow \nabla _{X} H\left( X , Y\right) = \left( XY - A\right) Y^{T} \quad \text{ and } \quad Y\rightarrow \nabla _{Y} H\left( X , Y\right) = X^{T}\left( XY - A\right) \end{aligned}$$

which are Lipschitz continuous with moduli \(L_{1}(Y) \equiv \left\| {YY^{T}} \right\| _{F}\) and \(L_{2}(X) \equiv \left\| {X^{T}X} \right\| _{F}\), respectively.
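
For concreteness, the gradients and the corresponding Lipschitz moduli above translate directly into NumPy; this is a sketch, and the function names are ours.

```python
import numpy as np

def grad_X(X, Y, A):
    # gradient of H(X, Y) = 0.5 * ||A - X Y||_F^2 with respect to X
    return (X @ Y - A) @ Y.T

def grad_Y(X, Y, A):
    # gradient of H with respect to Y
    return X.T @ (X @ Y - A)

def L1(Y):
    # Lipschitz modulus of X -> grad_X(X, Y, A), namely ||Y Y^T||_F
    return np.linalg.norm(Y @ Y.T, 'fro')

def L2(X):
    # Lipschitz modulus of Y -> grad_Y(X, Y, A), namely ||X^T X||_F
    return np.linalg.norm(X.T @ X, 'fro')
```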

As a specific case, let us now consider the overall sparsity measure of a matrix defined by

$$\begin{aligned} R_{1}\left( X\right) = \left\| {X} \right\| _{0} := \sum _{i} \left\| {x_{i}} \right\| _{0}, \;( x_{i} \; \text{ column } \text{ vector } \text{ of }\; X) \end{aligned}$$

which counts the number of nonzero elements in the matrix \(X\). Similarly \(R_{2}\left( Y\right) = \left\| {Y} \right\| _{0}\).

As shown in Example 3 (see the Appendix), both functions \(R_{1}\) and \(R_{2}\) are semi-algebraic. Thanks to the properties of semi-algebraic functions (see the Appendix), it follows that \(\varPsi _{c}\) is semi-algebraic, and PALM can be applied to produce a globally convergent algorithm. However, to apply PALM properly, we need to compute, at a given matrix \(U\), the proximal map of the nonconvex function \(f = \delta _{X \ge 0} + \delta _{\left\| {X} \right\| _{0} \le s}\). It turns out that this can be done effectively, as the next proposition shows. Our result makes use of the following operator (see, e.g., [26]).

Definition 4

Given any matrix \(U \in \mathbb R ^{m \times n}\), define the operator \(T_{s} : \mathbb R ^{m \times n} \rightrightarrows \mathbb R ^{m \times n}\) by

$$\begin{aligned} T_{s}\left( U\right) := \mathrm{argmin }_{V \in \mathbb R ^{m \times n}} \left\{ \left\| {U - V} \right\| _{F}^{2} : \; \left\| {V} \right\| _{0} \le s \right\} \!\,. \end{aligned}$$

Observe that the operator \(T_{s}\) is in general multi-valued. For a given matrix \(U\), it is easy to see that the elements of \(T_{s}\left( U\right) \) are obtained by choosing exactly \(s\) indices corresponding to the \(s\) largest entries (in absolute value) of \(U\), setting \(\left( T_{s}\left( U\right) \right) _{ij} = U_{ij}\) for such indices and \(\left( T_{s}\left( U\right) \right) _{ij} = 0\) otherwise. The multi-valuedness of \(T_{s}\) comes from the fact that the \(s\) largest entries may not be uniquely defined.

Since computing \(T_{s}\) only requires determining the \(s\) largest entries (in absolute value) among the \(mn\) entries of the matrix, this can be done in \(\mathcal O \left( mn\right) \) time [13], and the remaining entries can be zeroed out in one more pass over the \(mn\) numbers.
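
A straightforward NumPy realization of one selection of \(T_{s}\), using a linear-time partial sort, is sketched below; ties among entries of equal magnitude are broken arbitrarily, reflecting the multi-valuedness.

```python
import numpy as np

def T_s(U, s):
    """One selection of the (possibly multi-valued) operator T_s:
    keep the s entries of U largest in absolute value, zero the rest.
    Assumes 0 <= s <= U.size."""
    V = np.zeros_like(U)
    if s <= 0:
        return V
    flat = np.abs(U).ravel()
    keep = np.argpartition(flat, flat.size - s)[flat.size - s:]  # s largest, O(mn)
    V.flat[keep] = U.flat[keep]
    return V
```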

We define the usual projection map onto \(\mathbb R _{+}^{m \times n}\) by

$$\begin{aligned} P_{+}\left( U\right) := \mathrm{argmin }_{V \in \mathbb R ^{m \times n}} \left\{ \left\| {U - V} \right\| _{F}^{2} : \; V \ge 0 \right\} = \max \left\{ 0, U \right\} \!\,, \end{aligned}$$

where the \(\max \) operation is taken componentwise.

Proposition 4

(Proximal map formula) Let \(U \in \mathbb R ^{m \times n}\) and let \(f: = \delta _{X \ge 0} + \delta _{\left\| {X} \right\| _{0} \le s}\). Then

$$\begin{aligned} \hbox {prox}_{1}^{f}\left( U\right) = \mathrm{argmin }\left\{ \frac{1}{2}\left\| {X - U} \right\| _{F}^{2} : X \ge 0 , \left\| {X} \right\| _{0} \le s \right\} = T_{s}\left( P_{+}\left( U\right) \right) \end{aligned}$$

where \(T_{s}\) is defined in Definition 4.

Proof

Given any matrix \(U \in \mathbb R ^{m \times n}\), let us introduce the following notations

$$\begin{aligned} \left\| {X} \right\| _{+}^{2} = \sum _{(i , j) \in \mathcal{I}^{+}} X_{ij}^{2} \quad \text {and} \quad \left\| {X} \right\| _{-}^{2} = \sum _{ (i , j) \in \mathcal{I}^{-}} X_{ij}^{2}, \end{aligned}$$

where

$$\begin{aligned} \mathcal{I}^{+}= \left\{ \left( i , j\right) \in \left\{ 1 , \ldots , m \right\} \times \left\{ 1 , \ldots , n \right\} : \; U_{ij} \ge 0 \right\} \end{aligned}$$

and

$$\begin{aligned} \mathcal{I}^{-}= \left\{ \left( i , j\right) \in \left\{ 1 , \ldots , m \right\} \times \left\{ 1 , \ldots , n \right\} : \; U_{ij} < 0 \right\} \!. \end{aligned}$$

Observe that the following relations hold

$$\begin{aligned} (i) \quad \left\| {X} \right\| _{F}^2 = \left\| {X} \right\| _{+}^{2} +\left\| {X} \right\| _{-}^{2} \quad (ii) \quad \left\| {X - U} \right\| _{+}^{2} + \left\| {X} \right\| _{-}^{2} = \left\| {X - P_{+}\left( U\right) } \right\| _{F}^{2} \end{aligned}$$

and

$$\begin{aligned} (iii) \quad \left\| {X} \right\| _{-}^{2} = 0 \, \Leftrightarrow \, X_{ij} = 0 \quad \forall \,\left( i , j\right) \in \mathcal{I}^{-}\!, \end{aligned}$$

where the second relation follows from relation (i) and the fact that \(\left( P_{+}\left( U\right) \right) _{ij} = U_{ij}\) for any \((i , j) \in \mathcal{I}^{+}\) and \(\left( P_{+}\left( U\right) \right) _{ij} = 0\) for any \((i , j) \in \mathcal{I}^{-}\).

From the above fact (i), we thus have that \(\bar{X} \in \hbox {prox}_{1}^{f}\left( U\right) \) if and only if

$$\begin{aligned} \bar{X}&\in \mathrm{argmin }\left\{ \left\| {X - U} \right\| _{F}^{2} : \; X \ge 0, \; \left\| {X} \right\| _{0} \le s \right\} \nonumber \\&= \mathrm{argmin }\left\{ \left\| {X - U} \right\| _{+}^{2} + \left\| {X - U} \right\| _{-}^{2} : \; X \ge 0, \; \left\| {X} \right\| _{0} \le s \right\} \nonumber \\&= \mathrm{argmin }\left\{ \left\| {X \!-\! U} \right\| _{+}^{2} \!+\! \left\| {X} \right\| _{-}^{2} \!-\! 2\sum _{\left( i , j\right) \in \mathcal{I}^{-}} X_{ij}U_{ij} : \; X \!\ge \! 0, \; \left\| {X} \right\| _{0} \!\le \! s \right\} \end{aligned}$$
(4.1)
$$\begin{aligned}&= \mathrm{argmin }\left\{ \left\| {X - U} \right\| _{+}^{2} : \; X_{ij} = 0 \quad \forall \,\left( i , j\right) \in \mathcal{I}^{-}, \; X \ge 0, \; \left\| {X} \right\| _{0} \le s \right\} \!, \end{aligned}$$
(4.2)

where the last equality follows from the fact that every solution of (4.2) is clearly a solution of (4.1), while the converse implication follows by a simple contradiction argument. Arguing in a similar way, one can see that the constraint \(X\ge 0\) in problem (4.2) can be removed without affecting the optimal solution of that problem. Thus, recalling the facts (ii) and (iii) we obtain

$$\begin{aligned} \bar{X}&\in \mathrm{argmin }\left\{ \left\| {X - U} \right\| _{+}^{2} : \left\| {X} \right\| _{-}^{2}=0, \; \left\| {X} \right\| _{0} \le s \right\} \\&= \mathrm{argmin }\left\{ \left\| {X - U} \right\| _{+}^{2} + \left\| {X} \right\| _{-}^{2} : \; \left\| {X} \right\| _{0} \le s \right\} \\&= \mathrm{argmin }\left\{ \left\| {X - P_{+}\left( U\right) } \right\| _{F}^{2} : \; \left\| {X} \right\| _{0} \le s \right\} = T_{s}\left( P_{+}\left( U\right) \right) \!, \end{aligned}$$

where the last equality is by the definition of \(T_{s}\) (see Definition 4). \(\square \)
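
In code, Proposition 4 thus reduces the proximal computation to two cheap elementwise passes; a sketch reusing the \(T_{s}\) realization above:

```python
import numpy as np

def prox_nonneg_sparse(U, s):
    """One element of prox_1^f(U) for f = delta_{X >= 0} + delta_{||X||_0 <= s},
    computed as T_s(P_+(U)) according to Proposition 4."""
    return T_s(np.maximum(U, 0.0), s)  # P_+ is the componentwise max with 0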

With \(R_{1} := \delta _{X \ge 0} + \delta _{\left\| {X} \right\| _{0} \le \alpha }\) and \(R_{2} := \delta _{Y \ge 0} + \delta _{\left\| {Y} \right\| _{0} \le \beta }\), we now have all the ingredients to apply PALM and formulate explicitly a simple algorithm for the sparse nonnegative matrix factorization problem.

(Figure b: the PALM-Sparse NMF algorithm; its Step 2.1 and Eq. (4.3) are referenced below.)
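
The algorithm can be sketched end-to-end in Python as follows. This is our illustrative reading of PALM-Sparse NMF, not a verbatim transcription of figure b: the initialization, iteration count, and the safeguard \(\nu \) of Remark 7(i) below are our choices, and the sketch reuses `prox_nonneg_sparse` from the previous code block.

```python
import numpy as np

def palm_sparse_nmf(A, r, alpha, beta, gamma1=1.1, gamma2=1.1,
                    n_iter=500, nu=1e-10, seed=0):
    """Illustrative sketch of PALM-Sparse NMF.

    alpha, beta : sparsity levels, i.e. ||X||_0 <= alpha and ||Y||_0 <= beta
    gamma1, gamma2 : constants > 1; nu is the safeguard of Remark 7(i)
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    X, Y = rng.random((m, r)), rng.random((r, n))
    for _ in range(n_iter):
        # X-step with c_k = gamma1 * ||Y^k (Y^k)^T||_F, safeguarded by nu
        c = gamma1 * max(nu, np.linalg.norm(Y @ Y.T, 'fro'))
        U = X - ((X @ Y - A) @ Y.T) / c   # we read (4.3) as this gradient point U^k
        X = prox_nonneg_sparse(U, alpha)  # = T_alpha(P_+(U^k)) by Proposition 4
        # Y-step with d_k = gamma2 * ||X^{k+1} (X^{k+1})^T||_F
        d = gamma2 * max(nu, np.linalg.norm(X.T @ X, 'fro'))
        V = Y - (X.T @ (X @ Y - A)) / d
        Y = prox_nonneg_sparse(V, beta)
    return X, Y
```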

Remark 7

  (i)

    Observe that PALM-Sparse NMF requires that the Lipschitz moduli \(\left\| {X^{k + 1}\left( X^{k + 1}\right) ^{T}} \right\| _{F}\) and \(\left\| {Y^{k}\left( Y^{k}\right) ^{T}} \right\| _{F}\) remain bounded away from zero. Equivalently, this means that we assume that

    $$\begin{aligned} \inf _{k \in \mathbb N } \left\{ \left\| {X^{k}} \right\| _{F} , \left\| {Y^{k}} \right\| _{F} \right\} > 0. \end{aligned}$$

    In view of Remark 3(iii), we could avoid this assumption by introducing a safeguard \(\nu > 0\) and simply replacing the Lipschitz moduli in PALM-Sparse NMF by

    $$\begin{aligned} \max \left( \nu , \left\| {X^{k + 1}\left( X^{k + 1}\right) ^{T}} \right\| _{F}\right) \quad \text {and} \quad \max \left( \nu , \left\| {Y^{k}\left( Y^{k}\right) ^{T}} \right\| _{F}\right) . \end{aligned}$$
  (ii)

    Note that the easier nonnegative matrix factorization problem given in Example 1 is a particular instance of sparse NMF; in that case both operators \(T_{\alpha }\) and \(T_{\beta }\) reduce to the identity operator. Hence, for NMF, the computation in Step 2.1. reduces to

    $$\begin{aligned} X^{k + 1} = P_{+}\left( U^{k}\right) \end{aligned}$$

    where \(U^{k}\) is given in (4.3) (and similarly for \(Y^{k + 1}\)). Moreover, since the constraint sets \(\mathcal K _{m , r}\) and \(\mathcal K _{r , n}\) are closed and convex, it follows from Remark 4(iii) that we can set \(c_{k} = \left\| {Y^{k}\left( Y^{k}\right) ^{T}} \right\| _{F}\) and \(d_{k} = \left\| {X^{k + 1}\left( X^{k + 1}\right) ^{T}} \right\| _{F}\) in that case.

The assumptions required to apply PALM are clearly satisfied, and hence Theorem 1 shows that the generated sequence converges globally to a critical point of the sparse NMF problem (and similarly for NMF as a special case). We record this in the following theorem.

Theorem 2

Let \(\left\{ \left( X^{k} , Y^{k}\right) \right\} _{k \in \mathbb N }\) be a sequence generated by PALM-Sparse NMF which is assumed to be bounded and to satisfy \(\inf _{k \in \mathbb N }\left\{ \left\| {X^{k}} \right\| _{F} , \left\| {Y^{k}} \right\| _{F} \right\} > 0\). Then,

  (i)

    The sequence \(\left\{ \left( X^{k} , Y^{k}\right) \right\} _{k \in \mathbb N }\) has finite length, that is

    $$\begin{aligned} \sum _{k = 1}^{\infty } \left\| {X^{k + 1} - X^{k}} \right\| _{F} + \left\| {Y^{k + 1} - Y^{k}} \right\| _{F} < \infty . \end{aligned}$$
  (ii)

    The sequence \(\left\{ \left( X^{k} , Y^{k}\right) \right\} _{k \in \mathbb N }\) converges to a critical point \(\left( X^{*} ,Y^{*}\right) \) of the Sparse NMF.