Sequential optimality conditions for cardinality-constrained optimization problems with applications

Recently, a new approach to tackle cardinality-constrained optimization problems based on a continuous reformulation of the problem was proposed. Following this approach, we derive a problem-tailored sequential optimality condition, which is satisfied at every local minimizer without requiring any constraint qualification. We relate this condition to an existing M-type stationary concept by introducing a weak sequential constraint qualification based on a cone-continuity property. Finally, we present two algorithmic applications: We improve existing results for a known regularization method by proving that it generates limit points satisfying the aforementioned optimality conditions even if the subproblems are only solved inexactly. And we show that, under a suitable Kurdyka–Łojasiewicz-type assumption, any limit point of a standard (safeguarded) multiplier penalty method applied directly to the reformulated problem also satisfies the optimality condition. These results are stronger than corresponding ones known for the related class of mathematical programs with complementarity constraints.


Introduction
We consider cardinality-constrained (CC) optimization problems of the form

(1.1) min_x f(x) s.t. g(x) ≤ 0, h(x) = 0, ‖x‖_0 ≤ s,

where f ∈ C^1(ℝ^n, ℝ), g ∈ C^1(ℝ^n, ℝ^m), h ∈ C^1(ℝ^n, ℝ^p), and ‖x‖_0 denotes the number of nonzero components of a vector x. Throughout this paper, we assume that s < n since the cardinality constraint would otherwise be superfluous.
This class of problems has attracted great interest in recent years due to its abundance of applications including portfolio optimization [8,9,11] and statistical regression [8,14]. It should be noted, however, that these problems are difficult to solve, mainly due to the presence of the cardinality constraint defined by the mapping ‖ ⋅ ‖ 0 which, in spite of the notation used here, does not define a norm and is not even continuous. Even testing the feasibility of (1.1) is known to be NP-complete [8].
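As a concrete illustration of the ‖·‖_0 constraint, the following small Python sketch (illustrative data, not from the paper) evaluates ‖x‖_0 and tests feasibility of the constraint ‖x‖_0 ≤ s:

```python
import numpy as np

def card(x, tol=0.0):
    """Number of nonzero components of x (the l0-"norm").
    A tolerance can be used to treat tiny entries as zero."""
    x = np.asarray(x, dtype=float)
    return int(np.sum(np.abs(x) > tol))

def is_card_feasible(x, s):
    """Check the cardinality constraint ||x||_0 <= s."""
    return card(x) <= s

x = np.array([1.5, 0.0, -0.3, 0.0])
print(card(x))                 # 2
print(is_card_feasible(x, 2))  # True
print(is_card_feasible(x, 1))  # False
```

Note that `card` is neither continuous nor a norm, which is exactly the difficulty described above.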
One way to attack these problems is to reformulate them as mixed-integer problems. This reformulation is the backbone of many algorithms employing ideas from discrete optimization, see for example [8,9,13,24,30,32].
A new approach to solve this type of problem was introduced recently in [12], see also [15] for a similar approach in the context of sparse optimization. There, (1.1) is reformulated as a continuous optimization problem with orthogonality-type constraints, for which first-order stationarity concepts called CC-M- and CC-S-stationarity are derived. However, in order to guarantee that these stationarity conditions hold at a local minimizer of (1.1), one needs a constraint qualification. The regularization method from [17] is adapted to solve the reformulated problem, and it is shown that any limit point of this method satisfies the CC-M-stationarity condition provided that a constraint qualification called CC-CPLD holds at this limit point. Nevertheless, this convergence result is only proven for the exact case, i.e., under the assumption that an exact KKT point of the regularized subproblem can be computed in each iteration. Numerically, however, this is rarely the case. In the context of mathematical programs with complementarity constraints (MPCC for short), it is known that, if we take inexactness into account, then the convergence theory for this regularization method (like for most other regularization techniques) is weakened significantly [18].
Let us now describe the contributions of our paper. We first derive a sequential optimality condition called CC-AM-stationarity for (1.1), which is the CC-analogue of the approximate Karush-Kuhn-Tucker (AKKT) condition for standard nonlinear optimization problems (NLP) introduced in [3,10,25], see also [6,26] for similar concepts in the context of MPCCs. We show that this first-order necessary optimality condition is satisfied at every local minimizer of (1.1) without requiring a constraint qualification. In order to establish the relationship between CC-AM-stationarity and the CC-M-stationarity condition introduced in [12,31], we then propose a constraint qualification called CC-AM-regularity based on a cone-continuity property. This constraint qualification is the CC-analogue of the AKKT-regularity introduced in [4,5,10]. Just like CC-M-stationarity, both new concepts CC-AM-stationarity and CC-AM-regularity depend only on the original cardinality-constrained problem (1.1) and not on the auxiliary variable introduced in the continuous reformulation. Subsequently, we prove that any limit point of the regularization method introduced in [12,17] satisfies the CC-AM-stationarity condition in both the exact and inexact case, i.e., also in the situation where the resulting NLP-subproblems are solved only inexactly. This indicates that the application of these methods to CC problems does not suffer from any drawback when we take inexactness into account, in contrast to the MPCC case. Finally, we show that, under a suitable Kurdyka-Łojasiewicz-type assumption, any limit point of a standard (safeguarded) augmented Lagrangian method [1,10] applied directly to the reformulated problem also satisfies CC-AM-stationarity, see also [6] for a similar result obtained in the context of MPCCs.
Since numerical results for the methods investigated here can already be found in some other papers [12,21], our focus is on the theoretical background of these approaches.
The paper is organized as follows: We first recall some basic definitions and results in Sect. 2. Then we introduce the problem-tailored sequential optimality condition and a related constraint qualification in Sects. 3 and 4, respectively. This sequential optimality condition is then applied, in Sects. 5 and 6, to the regularization method and the augmented Lagrangian approach. We close with some final remarks in Sect. 7. There is also an appendix where we compare our sequential optimality condition with an existing one from [22], which is formulated specifically for the continuous reformulation.
Notation: For a given vector x ∈ ℝ^n, we define the index sets

I_±(x) := {i ∈ {1, …, n} | x_i ≠ 0} and I_0(x) := {i ∈ {1, …, n} | x_i = 0}.

Clearly we have {1, …, n} = I_±(x) ∪ I_0(x) and I_±(x) ∩ I_0(x) = ∅. Note that these definitions imply ‖x‖_0 = |I_±(x)|. Given a set C ⊆ ℝ^n, we denote the corresponding polar cone by C° := {y ∈ ℝ^n | y^T x ≤ 0 for all x ∈ C}. We write B_ε(x) and B̄_ε(x) for an open and a closed ball with radius ε > 0 around x.

Preliminaries
We first recall some basic definitions, cf. [27] for more details. For a multifunction Γ : ℝ^l ⇉ ℝ^q, the Painlevé-Kuratowski outer/upper limit of Γ(z) at ẑ ∈ ℝ^l is defined as

lim sup_{z→ẑ} Γ(z) := { w ∈ ℝ^q | ∃ {z^k} → ẑ, ∃ {w^k} → w with w^k ∈ Γ(z^k) for all k ∈ ℕ }.

For a nonempty and closed set A ⊆ ℝ^n and a point x ∈ A, the Bouligand tangent cone and the Fréchet normal cone to A at x are given by

T_A(x) := { d ∈ ℝ^n | ∃ {x^k} ⊆ A, ∃ {t_k} ↓ 0 with x^k → x and (x^k − x)/t_k → d } and N̂_A(x) := T_A(x)°.

The Fréchet normal cone for a set of particular interest in our framework is stated in the following result, whose proof follows from straightforward computations.
Lemma 2.1 Let C := {(a, b) ∈ ℝ² | ab = 0} and (x, y) ∈ C. Then we have

N̂_C((x, y)) = {0} × ℝ if x ≠ 0, y = 0; ℝ × {0} if x = 0, y ≠ 0; {(0, 0)} if x = 0, y = 0.

Next, let us take a closer look at (1.1) and follow the approach introduced in [12]. To simplify the notation, we define the set X := {x ∈ ℝ^n | g(x) ≤ 0, h(x) = 0}. Now consider x ∈ ℝ^n, and define a corresponding y ∈ ℝ^n by setting y_i := 0 for i ∈ I_±(x) and y_i := 1 for i ∈ I_0(x). Then ‖x‖_0 = n − e^T y, where e := (1, …, 1)^T ∈ ℝ^n. This leads to the following mixed-integer problem

(2.1) min_{x,y} f(x) s.t. g(x) ≤ 0, h(x) = 0, n − e^T y ≤ s, y_i ∈ {0, 1}, x_i y_i = 0 for all i = 1, …, n,

and its relaxation

(2.2) min_{x,y} f(x) s.t. g(x) ≤ 0, h(x) = 0, n − e^T y ≤ s, y ≤ e, x • y = 0,

where • denotes the Hadamard product. Note that (2.2) slightly differs from the continuous reformulation in [12] since we drop the constraint y ≥ 0 here, which leads to a larger feasible set. Nevertheless, it is easy to see that all results obtained in Sect. 3 of [12] are applicable to our reformulation here as well. Let us now gather these results, cf. [12] for the proofs.

Theorem 2.2 Let x ∈ ℝ^n. Then the following statements hold: (a) x is feasible for (1.1) if and only if there exists ŷ ∈ ℝ^n such that (x, ŷ) is feasible for (2.2). (b) x is a global minimizer of (1.1) if and only if there exists ŷ ∈ ℝ^n such that (x, ŷ) is a global minimizer of (2.2). (c) If x ∈ ℝ^n is a local minimizer of (1.1), then there exists ŷ ∈ ℝ^n such that (x, ŷ) is a local minimizer of (2.2). Conversely, if (x, ŷ) is a local minimizer of (2.2) satisfying ‖ŷ‖_0 = s, then x is a local minimizer of (1.1).
Note that the extra condition for the converse statement in Theorem 2.2 (c) is necessary, in general, see [12,Example 3] for a counterexample.
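The correspondence between x and the auxiliary variable y used in the reformulation can be sketched numerically; the helper name `lift` is ours:

```python
import numpy as np

def lift(x):
    """Auxiliary variable from the continuous reformulation:
    y_i = 0 where x_i != 0, and y_i = 1 where x_i = 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x != 0.0, 0.0, 1.0)

x = np.array([2.0, 0.0, -1.0, 0.0, 0.0])
y = lift(x)
n = x.size
# ||x||_0 = n - e^T y, and the Hadamard product x . y vanishes
print(np.count_nonzero(x) == n - y.sum())  # True
print(np.all(x * y == 0.0))                # True
```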
We close this section by noting that, occasionally, some constraint qualifications defined in [31, Definition 3.5] will play some role within this paper. In particular, this includes the CC-ACQ and CC-GCQ condition, which are problem-tailored modifications of the standard Abadie and Guignard CQs, respectively. Since their exact definitions require some overhead and the details are not relevant in our

context, we refrain from stating their definitions here. We only stress that these are fairly mild constraint qualifications.

A sequential optimality condition
Sequential optimality conditions like the AKKT conditions for NLPs have become very popular during the last few years, see [10]. In principle, these AKKT conditions can also be applied to the optimization problem (2.2) by viewing this program as an NLP. But then too many points satisfy the AKKT property, see [22, Thm. 4.1], so that the AKKT conditions turn out to be too weak an optimality condition for this problem (i.e., besides the local minima, many other feasible points satisfy the standard AKKT conditions). This means that suitable problem-tailored sequential optimality conditions are required for cardinality-constrained and related problems with "difficult" constraints.
This was done, for example, in [23] for a very general class of problems. The concept there is based on the limiting normal cone and can, in principle, be specialized to our setting. Instead of recalling this general theory and then specializing the corresponding concepts, we decided to use a direct and very elementary approach in this (and the subsequent) section. We stress that our definition is based on the original problem (1.1) in the x-space. The recent report [22] also introduces a sequential optimality condition for cardinality-constrained programs which, however, is essentially based on the reformulated problem (2.2) in the (x, y)-space. Nevertheless, it turns out that our formulation is, in some sense, equivalent to the notion from [22]. Since this equivalence is not exploited in our subsequent analysis, we discuss the details in the appendix.

Definition 3.1 Let x̄ ∈ ℝ^n be a feasible point of (1.1). We say that x̄ is CC approximately M-stationary (CC-AM-stationary) if there exist sequences {x^k} ⊆ ℝ^n, {λ^k} ⊆ ℝ^m_+, {μ^k} ⊆ ℝ^p, and {γ^k} ⊆ ℝ^n such that {x^k} → x̄ and

∇f(x^k) + ∇g(x^k)λ^k + ∇h(x^k)μ^k + γ^k → 0

as well as λ^k_i = 0 for all i ∉ I_g(x̄) and γ^k_i = 0 for all i ∈ I_±(x̄) for all k ∈ ℕ.
Note that the two requirements λ^k_i = 0 and γ^k_i = 0 are assumed to hold for all k ∈ ℕ. Subsequencing if necessary, it is easy to see that this is equivalent to forcing these multiplier estimates to be zero only for all k ∈ ℕ sufficiently large. We further stress that Definition 3.1 makes no assumptions regarding the boundedness of the multiplier estimates.
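To illustrate the definition, consider a hypothetical two-dimensional instance without constraints g and h (this example is ours, not from the paper); at the point below, constant sequences already satisfy the CC-AM-stationarity requirements:

```python
import numpy as np

# Hypothetical instance: min (x1-1)^2 + (x2-1)^2  s.t.  ||x||_0 <= 1.
def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 1.0)])

x_bar = np.array([1.0, 0.0])   # candidate point: I_pm = {0}, I_0 = {1}
gamma = np.array([0.0, 2.0])   # gamma_i = 0 required for i in I_pm

# With the constant sequences x^k = x_bar and gamma^k = gamma, the
# stationarity residual grad f(x^k) + gamma^k is identically zero:
residual = grad_f(x_bar) + gamma
print(np.linalg.norm(residual))  # 0.0
```

Since no inequality or equality constraints are present here, the multiplier sequences λ^k and μ^k are empty, and the residual above is the whole condition.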
If we define W := {(x, y) ∈ ℝ^n × ℝ^n | x • y = 0}, then the feasible set Z of (2.2) has the form

Z = {(x, y) ∈ ℝ^n × ℝ^n | g(x) ≤ 0, h(x) = 0, n − e^T y ≤ s, y ≤ e} ∩ W.

The following theorem shows that CC-AM-stationarity is a first-order necessary optimality condition for (1.1) without the need for some kind of constraint qualification.
Theorem 3.2 Let x̄ ∈ ℝ^n be a local minimizer of (1.1). Then x̄ is a CC-AM-stationary point.
Proof Since x̄ is a local minimizer of (1.1), by Theorem 2.2, there exists ŷ ∈ ℝ^n such that (x̄, ŷ) is a local minimizer of (2.2). Hence, we can find an ε > 0 such that f(x) ≥ f(x̄) for all (x, y) ∈ B̄_ε((x̄, ŷ)) ∩ Z. Obviously (x̄, ŷ) is then the unique global minimizer of

(3.1) min_{x,y} f(x) + ½‖x − x̄‖² + ½‖y − ŷ‖² s.t. (x, y) ∈ B̄_ε((x̄, ŷ)) ∩ Z.

Now pick a sequence {α_k} ⊆ ℝ_+ such that {α_k} ↑ ∞, and consider for each k ∈ ℕ the partially penalized and localized problem

(3.2) min_{x,y} f(x) + α_k π(x, y) + ½‖x − x̄‖² + ½‖y − ŷ‖² s.t. (x, y) ∈ B̄_ε((x̄, ŷ)) ∩ W,

where

π(x, y) := ½ ( ‖max{0, g(x)}‖² + ‖h(x)‖² + max{0, n − s − e^T y}² + ‖max{0, y − e}‖² ).

The objective function of (3.2) is continuously differentiable for all k ∈ ℕ. Furthermore, the feasible set B̄_ε((x̄, ŷ)) ∩ W is nonempty and compact. Hence, for each k ∈ ℕ, (3.2) admits a global minimizer (x^k, y^k). We thus have a sequence {(x^k, y^k)} in the compact set B̄_ε((x̄, ŷ)) ∩ W and can thus assume w.l.o.g. that {(x^k, y^k)} converges, i.e., there exists (x̃, ȳ) ∈ B̄_ε((x̄, ŷ)) ∩ W such that {(x^k, y^k)} → (x̃, ȳ). We now want to show that (x̃, ȳ) = (x̄, ŷ). Since (x̄, ŷ) ∈ Z, it is a feasible point of (3.2) for each k ∈ ℕ with π(x̄, ŷ) = 0. Thus, we obtain for each k ∈ ℕ that

(3.3) f(x^k) + α_k π(x^k, y^k) + ½‖x^k − x̄‖² + ½‖y^k − ŷ‖² ≤ f(x̄).

Dividing (3.3) by α_k and letting k → ∞ yields π(x̃, ȳ) ≤ 0 and hence π(x̃, ȳ) = 0. This implies that (x̃, ȳ) ∈ B̄_ε((x̄, ŷ)) ∩ Z and therefore, it is feasible for (3.1). Furthermore, we also obtain from (3.3) that

f(x^k) + ½‖x^k − x̄‖² + ½‖y^k − ŷ‖² ≤ f(x̄) for all k ∈ ℕ

and hence, by letting k → ∞,

f(x̃) + ½‖x̃ − x̄‖² + ½‖ȳ − ŷ‖² ≤ f(x̄).

Since (x̄, ŷ) is the unique global solution of (3.1), we then necessarily have (x̃, ȳ) = (x̄, ŷ). In particular, (x^k, y^k) lies in the interior of B̄_ε((x̄, ŷ)) for all k sufficiently large, so the first-order optimality conditions of (3.2) yield multiplier estimates λ^k := α_k max{0, g(x^k)}, μ^k := α_k h(x^k), and a suitable γ^k. For each i ∉ I_g(x̄) we have g_i(x̄) < 0, so we can assume w.l.o.g. that, for each k ∈ ℕ, we have g_i(x^k) < 0, which in turn implies that max{0, g_i(x^k)} = 0 and hence, in particular, λ^k_i = 0. Using these definitions and {x^k} → x̄, we obtain sequences satisfying all requirements of Definition 3.1. This completes the proof. ◻

It is also possible to bypass the continuous reformulation (2.2) and prove Theorem 3.2 directly based on the original problem (1.1), using techniques from variational analysis. The reason why we did not do that here is that the above proof also shows that every local minimizer of (2.2) is a CC-AM-stationary point. Now recall that (2.2) can have local minimizers which are not local minimizers of (1.1), see e.g. [12, Example 3]. This immediately implies that CC-AM-stationary points are not necessarily local minimizers of (1.1), i.e. the converse of Theorem 3.2 is false in general.
We close this section by considering the special case X = ℝ^n, i.e., we have the problem

(3.5) min_x f(x) s.t. ‖x‖_0 ≤ s.

In [7], a first-order necessary optimality condition for (3.5) called basic feasibility was introduced, see the reference for details. Here we only note that the notion of basic feasibility can be shown to be identical to our CC-AM-stationarity at any feasible point x̄ satisfying ‖x̄‖_0 = s, i.e., these two optimality conditions coincide in the interesting case, where the cardinality constraint is active.
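For problem (3.5), feasible points are easy to generate by hard thresholding, i.e., keeping the s largest entries in absolute value. The following sketch (our own helper, not code from [7]) is one common way to do so:

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest.
    This is a (not necessarily unique) projection onto {x : ||x||_0 <= s}."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    if s <= 0:
        return out                       # keep nothing (a[-0:] would keep all)
    keep = np.argsort(np.abs(x))[-s:]    # indices of the s largest |x_i|
    out[keep] = x[keep]
    return out

x = np.array([0.1, -3.0, 0.5, 2.0])
print(hard_threshold(x, 2))   # keeps -3.0 and 2.0, zeros the rest
```

Ties in |x_i| make the projection set-valued; `argsort` simply picks one representative.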

A cone-continuity-type constraint qualification
Let x̄ ∈ ℝ^n be feasible for (1.1). Then we define for each x ∈ ℝ^n the cone

K_x̄(x) := { ∇g(x)λ + ∇h(x)μ + γ | λ ∈ ℝ^m_+ with λ_i = 0 for all i ∉ I_g(x̄), μ ∈ ℝ^p, γ ∈ ℝ^n with γ_i = 0 for all i ∈ I_±(x̄) },

where I_g(x̄) := {i | g_i(x̄) = 0} denotes the set of active inequality constraints. Note that the index sets I_g(x̄) and I_±(x̄) depend on x̄ and not on x. With this cone, we can translate Definition 3.1 into the language of variational analysis, see also [4].

Theorem 4.1 Let x̄ ∈ ℝ^n be feasible for (1.1). Then x̄ is a CC-AM-stationary point if and only if −∇f(x̄) ∈ lim sup_{x→x̄} K_x̄(x).
Proof "⇒": By assumption, there exist sequences {x^k} → x̄, {λ^k} ⊆ ℝ^m_+, {μ^k} ⊆ ℝ^p, and {γ^k} ⊆ ℝ^n as in Definition 3.1. Define w^k := ∇g(x^k)λ^k + ∇h(x^k)μ^k + γ^k for each k ∈ ℕ. Clearly, we have {w^k} → −∇f(x̄). Moreover, by the last two conditions in Definition 3.1, we also have w^k ∈ K_x̄(x^k) for each k ∈ ℕ. Hence, we have −∇f(x̄) ∈ lim sup_{x→x̄} K_x̄(x).
"⇐": By assumption, there exist sequences {x^k} → x̄ and {w^k} → −∇f(x̄) with w^k ∈ K_x̄(x^k) for each k ∈ ℕ. By the definition of K_x̄(x^k), there are multipliers λ^k ∈ ℝ^m_+ with λ^k_i = 0 for all i ∉ I_g(x̄), μ^k ∈ ℝ^p, and γ^k ∈ ℝ^n with γ^k_i = 0 for all i ∈ I_±(x̄) such that w^k = ∇g(x^k)λ^k + ∇h(x^k)μ^k + γ^k. For these multipliers, we obtain

∇f(x^k) + ∇g(x^k)λ^k + ∇h(x^k)μ^k + γ^k = ∇f(x^k) + w^k → ∇f(x̄) − ∇f(x̄) = 0.

Thus, x̄ is a CC-AM-stationary point. ◻

Let us now recall the CC-M-stationarity concept introduced in [12], where it was shown to be a first-order optimality condition for (1.1) under suitable assumptions.

Definition 4.2 Let x̄ ∈ ℝ^n be a feasible point of (1.1). We then say that x̄ is CC-M-stationary if there exist multipliers λ ∈ ℝ^m_+, μ ∈ ℝ^p, and γ ∈ ℝ^n such that

∇f(x̄) + ∇g(x̄)λ + ∇h(x̄)μ + γ = 0, λ_i = 0 for all i ∉ I_g(x̄), and γ_i = 0 for all i ∈ I_±(x̄).

The following translation is then obvious.
This implies that CC-AM-stationarity is a weaker optimality condition than CC-M-stationarity.
The assertion then follows from Theorem 4.1. ◻ The reverse implication is not true in general. The gap can be bridged by the following cone-continuity-type condition: we say that a feasible point x̄ of (1.1) satisfies CC-AM-regularity if lim sup_{x→x̄} K_x̄(x) ⊆ K_x̄(x̄). One can further verify (Example 4.8) that the origin, whenever it belongs to the feasible set, is always a CC-M-stationary point and satisfies CC-AM-regularity.
Borrowing terminology from [4], Theorem 4.7 proves that CC-AM-regularity is a "strict constraint qualification" in the sense that it yields the implication "CC-AM-stationarity ⟹ CC-M-stationarity". The next result shows that CC-AM-regularity is actually the weakest condition which guarantees that CC-AM-stationary points are already CC-M-stationary.

Theorem 4.9 Let x̄ ∈ ℝ^n be feasible for (1.1). Suppose that, for every continuously differentiable function f ∈ C^1(ℝ^n, ℝ), the implication "x̄ is CC-AM-stationary ⟹ x̄ is CC-M-stationary" holds. Then x̄ satisfies CC-AM-regularity.

Observe that both CC-CPLD and CC-AM-regularity do not depend on the auxiliary variable y. In contrast to this, CC-ACQ and CC-GCQ are defined using (2.2) and thus depend on the pair (x̄, ŷ). For a feasible point (x̄, ŷ) of (2.2), the implications CC-CPLD ⟹ CC-ACQ ⟹ CC-GCQ are known from [31]. For standard NLPs, it is known that AKKT-regularity implies ACQ, cf. [4, Theorem 4.4]. However, as the following example illustrates, for cardinality-constrained problems CC-AM-regularity does not even imply CC-GCQ.
Example 4.11 ([12, Example 4]) Consider the problem discussed in [12, Example 4], whose unique global minimizer is x̄ := (0, 0)^T. By Example 4.8, x̄ also satisfies CC-AM-regularity. On the other hand, if we choose ŷ := (0, 1)^T, it follows from [12] that (x̄, ŷ) does not satisfy CC-GCQ, even though (x̄, ŷ) is a global minimizer of the corresponding reformulated problem.
To close this section, let us remark on the relationship between CC-AM-stationarity and another stationarity concept introduced in [12] called CC-S-stationarity.
As remarked in [12], CC-S-stationarity of (x̄, ŷ) corresponds to the KKT conditions of (2.2) and implies CC-M-stationarity of x̄. The converse is not true in general, see [12, Example 4]. However, if x̄ is CC-M-stationary, then it is always possible to replace ŷ with another auxiliary variable ẑ ∈ ℝ^n such that (x̄, ẑ) is CC-S-stationary, see [21, Prop. 2.3].

Application to regularization methods
Let us consider the regularization method from [17], which was adapted for (2.2) in [20]. Let t ≥ 0 be a regularization parameter and define

φ((a, b); t) := (a − t)(b − t) if a + b ≥ 2t, and φ((a, b); t) := −½((a − t)² + (b − t)²) if a + b < 2t.

As shown in [17, Lemma 3.1], this function is continuously differentiable with

∇φ((a, b); t) = (b − t, a − t)^T if a + b ≥ 2t, and ∇φ((a, b); t) = −(a − t, b − t)^T if a + b < 2t,

and φ((a, b); 0) is an NCP-function, i.e., φ((a, b); 0) = 0 if and only if a ≥ 0, b ≥ 0, ab = 0. Now, let t > 0 be a regularization parameter. In order to relax the constraint x • y = 0 in (2.2) in all four directions, we define the following functions for all i ∈ {1, …, n}:

Φ^KS_{j,i}((x, y); t) := φ((±x_i, ±y_i); t), j = 1, …, 4,

one function for each of the four sign combinations of (x_i, y_i). These functions are continuously differentiable and their derivatives with respect to (x, y) can be computed using ∇φ and the chain rule.
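A small numerical sketch of such a regularization function may be helpful; the piecewise formula below is our reading of the function from [17] and should be treated as an assumption:

```python
def phi(a, b, t):
    """Kanzow-Schwartz-type regularization function (cf. [17]);
    piecewise formula is our reading of that reference."""
    if a + b >= 2.0 * t:
        return (a - t) * (b - t)
    return -0.5 * ((a - t) ** 2 + (b - t) ** 2)

# For t = 0 this acts as an NCP-function:
# phi(a, b, 0) == 0  iff  a >= 0, b >= 0 and a * b == 0.
print(phi(1.0, 0.0, 0.0))   # 0.0  (complementary and nonnegative)
print(phi(1.0, 1.0, 0.0))   # 1.0  (both positive -> penalized)
print(phi(-1.0, 0.0, 0.0))  # -0.5 (negative component -> penalized)
```

For t > 0 the kink of the NCP-function is smoothed out, which is what makes the regularized subproblems tractable for standard NLP solvers.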
For t > 0, we now formulate the regularized problem NLP_KS(t) as (see Fig. 1)

min_{x,y} f(x) s.t. g(x) ≤ 0, h(x) = 0, n − e^T y ≤ s, y ≤ e, Φ^KS_{j,i}((x, y); t) ≤ 0 for all i = 1, …, n and j = 1, …, 4.

Note that our regularized problem slightly differs from the one used in [12] since we drop the constraint y ≥ 0 here and instead use two more regularization functions Φ^KS_{2,i} and Φ^KS_{3,i}. In the exact case, we obtain the following convergence result.

Theorem 5.1 Let {t_k} ↓ 0 and, for each k ∈ ℕ, let (x^k, y^k) be a KKT point of NLP_KS(t_k). Then every limit point (x̄, ŷ) of the sequence {(x^k, y^k)} is feasible for (2.2), and x̄ is a CC-AM-stationary point of (1.1).
The proof of this result is similar to the inexact case, which we discuss next. Hence, we omit the details and refer to the proof of the related result in Theorem 5.3. In order to tackle the inexact case, we first need to define inexactness. Consider a standard NLP

(5.2) min_x f(x) s.t. g(x) ≤ 0, h(x) = 0,

where all functions are assumed to be continuously differentiable. The following definition of inexactness can be found e.g. in [18, Definition 1].
Definition 5.2 Let x ∈ ℝ^n and ε > 0. We then say that x is an ε-stationary point of (5.2) if there exist multipliers λ ∈ ℝ^m_+ and μ ∈ ℝ^p such that

‖∇f(x) + ∇g(x)λ + ∇h(x)μ‖ ≤ ε, g_i(x) ≤ ε and |λ_i g_i(x)| ≤ ε for all i = 1, …, m, and |h_i(x)| ≤ ε for all i = 1, …, p.

In the context of MPCCs, it is known that inexactness negatively impacts the convergence theory of this relaxation method, see [18]. The following result shows that this is not the case for cardinality-constrained problems.
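As a rough numerical illustration, one can test ε-stationarity for a toy NLP. The helper below encodes stationarity, feasibility, and complementarity up to a tolerance ε; it is our own sketch of such a check, not code from [18]:

```python
import numpy as np

def is_eps_stationary(grad_L, g_val, h_val, lam, eps):
    """Check an (assumed) eps-stationarity notion for an NLP:
    stationarity, feasibility and complementarity all hold up to eps.
    grad_L = grad f + grad g @ lam + grad h @ mu, evaluated at x."""
    ok_stat = np.linalg.norm(grad_L, np.inf) <= eps
    ok_feas = np.all(g_val <= eps) and np.all(np.abs(h_val) <= eps)
    ok_comp = np.all(np.abs(lam * g_val) <= eps) and np.all(lam >= 0)
    return ok_stat and ok_feas and ok_comp

# Tiny check: min x^2 s.t. x >= 1, i.e. g(x) = 1 - x <= 0, near x = 1.001.
x = 1.001
lam = 2.0                         # close to the exact multiplier lambda* = 2
grad_L = np.array([2 * x - lam])  # d/dx [x^2 + lam * (1 - x)]
print(is_eps_stationary(grad_L, np.array([1 - x]), np.array([]),
                        np.array([lam]), 1e-2))  # True
```

Such a check is what an NLP solver's stopping criterion effectively delivers, which is why the inexact analysis below is the practically relevant one.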
Let us first note that {y^k} is bounded. In fact, by (KS6), we have for each i ∈ {1, …, n} that y^k_i ≤ 1 + ε_k for all k ∈ ℕ, hence {y^k} is bounded from above. Taking this into account and using (KS5), i.e., n − s − ε_k ≤ e^T y^k, we also get that {y^k} is bounded from below. Since {y^k} is bounded, it has a convergent subsequence. By passing to a subsequence, we can assume w.l.o.g. that the whole sequence converges, say {y^k} → ŷ. In particular, we then have {(x^k, y^k)} → (x̄, ŷ).
Let us now prove that (x̄, ŷ) is feasible for (2.2). By (KS3)-(KS6), we obviously have g(x̄) ≤ 0, h(x̄) = 0, n − e^T ŷ ≤ s, and ŷ ≤ e. Hence, it remains to prove that x̄ • ŷ = 0. Suppose that this is not the case. Then there exists an index i ∈ {1, …, n} such that x̄_i ŷ_i ≠ 0. W.l.o.g. let us assume x̄_i > 0 and ŷ_i > 0; the other three possibilities can be treated analogously. Since {x^k_i + y^k_i} → x̄_i + ŷ_i > 0 and

{t_k} ↓ 0, we can assume w.l.o.g. that x^k_i + y^k_i ≥ 2t_k for all k ∈ ℕ. Hence we have φ((x^k_i, y^k_i); t_k) = (x^k_i − t_k)(y^k_i − t_k) → x̄_i ŷ_i. From (KS7), we then obtain x̄_i ŷ_i ≤ 0 for the limit, which yields a contradiction since x̄_i ŷ_i > 0 in this case. Altogether, we can conclude that x̄ • ŷ = 0 and, therefore, (x̄, ŷ) is feasible for (2.2).
By Theorem 2.2, x̄ is then feasible for (1.1). Now let λ^k, μ^k, and ρ^k_{j,i} denote the multiplier estimates from the ε_k-stationarity conditions of NLP_KS(t_k); the associated stationarity residual converges to 0. For all i ∈ I_±(x̄) we know ŷ_i = 0 from the feasibility of (x̄, ŷ) for (2.2). Assume first that x̄_i > 0. Since {t_k} ↓ 0 and {x^k_i ± y^k_i} → x̄_i ± ŷ_i = x̄_i > 0, we can assume w.l.o.g. that x^k_i ± y^k_i ≥ 2t_k for all k ∈ ℕ. Using (KS7), we see in each case that the multiplier contributions of the regularization constraints in the i-th component vanish asymptotically. For the case x̄_i < 0, we argue analogously. Putting things together and defining

A^k := ∇f(x^k) + ∇g(x^k)λ^k + ∇h(x^k)μ^k + Σ_{j=1}^{4} Σ_{i=1}^{n} ρ^k_{j,i} ∇_x Φ^KS_{j,i}((x^k, y^k); t_k)

for each k ∈ ℕ, we obtain {A^k} → 0. From the structure of ∇_x Φ^KS_{j,i}, we know ∇_x Φ^KS_{j,i}((x^k, y^k); t_k) ∈ span{e_i} for all i ∈ I_0(x̄) and all j = 1, …, 4. Consequently, there exists γ̂^k_i ∈ ℝ for all i ∈ I_0(x̄) collecting the corresponding regularization terms, while the terms belonging to i ∈ I_±(x̄) vanish asymptotically by the first part of the proof. If we define γ̂^k_i := 0 for each i ∈ I_±(x̄), we obtain for all k ∈ ℕ,

A^k = ∇f(x^k) + ∇g(x^k)λ^k + ∇h(x^k)μ^k + γ̂^k

up to an error term that tends to 0. By (5.5) and since {A^k} → 0, it then follows that x̄ is a CC-AM-stationary point.

In light of Corollary 4.10, the results obtained in this section are stronger than the result from [12], not only because we take inexactness into account, but also because we only need to assume CC-AM-regularity instead of CC-CPLD.

Application to an augmented Lagrangian method
Hence, even if a limit point is infeasible, we have at least a stationary point of the constraint violation. In general, for the nonconvex case discussed here, we cannot expect more than this. The remaining part of this section therefore considers the case where a limit point is feasible.
A global convergence analysis of Algorithm 6.1 for cardinality-constrained problems can already be found in [21], where the authors establish convergence to CC-M-stationary points under a problem-tailored quasi-normality condition. Here, our aim is to verify that Algorithm 6.1, at least under suitable assumptions, generates limit points satisfying the sequential optimality condition introduced in this paper. Note that this result is independent of the one in [21] since the quasi-normality condition from there is not related to our sequential regularity assumption, cf. [4] for a corresponding discussion in the NLP case.
Just like in [6, Theorem 5.1], we require the generalized Kurdyka-Łojasiewicz (GKL) inequality to be satisfied at a feasible limit point (x̄, ŷ) of Algorithm 6.1 by the constraint violation measure

σ_{0,1}((x, y)) := ½ ( ‖max{0, g(x)}‖² + ‖h(x)‖² + Σ_{i=1}^{n} (x_i y_i)² + max{0, n − s − e^T y}² + ‖max{0, y − e}‖² ).

Some comments on the GKL inequality are due. A continuously differentiable function F ∈ C^1(ℝ^n, ℝ) is said to satisfy the GKL inequality at x̄ ∈ ℝ^n if there exist ε > 0 and φ : B_ε(x̄) → ℝ such that lim_{x→x̄} φ(x) = 0 and

|F(x) − F(x̄)| ≤ φ(x) ‖∇F(x)‖ for each x ∈ B_ε(x̄).

As discussed in [2, Page 3546], the GKL inequality is a relatively mild condition. For example, it is satisfied at every feasible point of the standard NLP (5.2) provided that all constraint functions are analytic. If we view (2.2) as a standard NLP, then all constraints involving the auxiliary variable y are polynomial in nature and therefore analytic. Thus, if the nonlinear constraints g_i and h_i are analytic, the GKL inequality is then automatically satisfied. In the rest of this section, we consider a feasible limit point (x̄, ŷ) of the sequence {(x^k, y^k)} generated by Algorithm 6.1 and prove that x̄ is a CC-AM-stationary point if the GKL inequality is satisfied by σ_{0,1} at (x̄, ŷ).
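The GKL inequality can be checked by hand in simple cases. The sketch below uses the form |F(x) − F(x̄)| ≤ φ(x)|F′(x)| (our reading of the inequality) for F(x) = x² at x̄ = 0, where φ(x) = |x|/2 works exactly:

```python
import numpy as np

# GKL sketch for F(x) = x^2 at x_bar = 0:
# |F(x) - F(0)| = x^2 = (|x|/2) * |F'(x)|, so phi(x) = |x|/2 -> 0 works.
F = lambda x: x ** 2
dF = lambda x: 2.0 * x
phi = lambda x: np.abs(x) / 2.0

xs = np.linspace(-0.5, 0.5, 101)
lhs = np.abs(F(xs) - F(0.0))
rhs = phi(xs) * np.abs(dF(xs))
print(np.allclose(lhs, rhs))  # True
```

The same computation goes through for any analytic function with the appropriate Łojasiewicz exponent, which is the intuition behind the "mildness" remark above.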
Before we proceed, we would like to note that, just like in the case of the previous regularization method, the existence of a limit point of {x k } actually already guarantees the boundedness of {y k } on the same subsequence. Hence, we essentially only need to assume the convergence of {x k } and we can then extract a limit point (x,ŷ) of {(x k , y k )} . A proof of this observation is given in [21]. But for simplicity, we assume in our next result that the sequence {(x k , y k )} has a limit point.
Moreover, using (6.1) and {ε_k} ↓ 0, we see that the stationarity residual of the augmented Lagrangian subproblems tends to 0. Now, by (S2), we also have {λ^k} ⊆ ℝ^m_+. Furthermore, the sequence of penalty parameters {ρ_k} is nondecreasing. We distinguish two cases.
Case 1: {ρ_k} is bounded. Then (S3) implies ρ_k = ρ_K for all k ≥ K with some sufficiently large K ∈ ℕ. In particular, (6.2) then holds for each k ≥ K. This, in turn, implies that {‖U^k‖} → 0. Consider an index i ∉ I_g(x̄). By definition, the safeguarded multiplier sequence {λ̄^k} is bounded. Thus, {λ̄^k_i} is bounded as well and therefore has a convergent subsequence.
Assume w.l.o.g. that {λ̄^k_i} converges and denote by a_i its limit. We therefore obtain {λ^k_i} → max{0, a_i + ρ_K g_i(x̄)}, and since {‖U^k‖} → 0 and g_i(x̄) < 0, this limit must be 0. Now consider an i ∈ I_±(x̄). By assumption, (x̄, ŷ) is feasible for (2.2), which implies ŷ_i = 0. By assumption, {ρ_k} is bounded. Moreover, {ν̄^k_i} is also bounded by definition. Hence, by (S2), {ν^k_i} is bounded as well, which implies {ν^k_i y^k_i} → 0. Let us define γ̂^k ∈ ℝ^n for all k ∈ ℕ by γ̂^k_i := ν^k_i y^k_i for i ∈ I_0(x̄) and γ̂^k_i := 0 for i ∈ I_±(x̄), where the discarded components tend to 0. Then by (6.4) the stationarity residual ∇f(x^k) + ∇g(x^k)λ^k + ∇h(x^k)μ^k + γ̂^k tends to 0. This, along with (6.6), implies that x̄ is CC-AM-stationary.
Since {φ((x^k, y^k))} → 0 by the GKL inequality, we conclude that {ρ_k σ_{0,1}((x^k, y^k))} → 0. Now consider an i ∈ I_±(x̄). By the definition of σ_{0,1}, we obviously have 0 ≤ ρ_k (x^k_i y^k_i)² ≤ 2 ρ_k σ_{0,1}((x^k, y^k)) for each k ∈ ℕ. Letting k → ∞, we then obtain {ρ_k (x^k_i y^k_i)²} → 0. Now since x̄_i ≠ 0, we can assume w.l.o.g. that x^k_i ≠ 0 for all k ∈ ℕ. Then

{ρ_k (y^k_i)²} = {ρ_k (x^k_i y^k_i)² / (x^k_i)²} → 0.

By the feasibility of (x̄, ŷ) we have {y^k_i} → ŷ_i = 0. Since {ν̄^k_i} is bounded, we then obtain {ν^k_i y^k_i} → 0. The remainder of the proof is then analogous to the case where {ρ_k} is bounded, and we conclude that x̄ is a CC-AM-stationary point. ◻

While the regularization method from Sect. 5 generates CC-AM-stationary limit points without any further assumptions, the augmented Lagrangian approach discussed here requires an additional condition (GKL in our case). A counterexample presented in [2] for standard NLPs shows that such an additional assumption has to be expected in the context of augmented Lagrangian methods. In this context, one should also take into account that the regularization method is a problem-tailored solution technique designed specifically for the solution of CC-problems, whereas the augmented Lagrangian method is a standard solver for NLPs. In general, such standard NLP-solvers can be expected to have severe problems in solving CC-type problems due to the lack of constraint qualifications and due to the fact that KKT points may not even exist. Nonetheless, the results from this section show that the standard (safeguarded) augmented Lagrangian algorithm is at least a viable tool for the solution of optimization problems with cardinality constraints. We also refer to [16] for a related discussion for MPCCs.
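To illustrate the safeguarded multiplier update discussed in this section, here is a minimal sketch on a hypothetical equality-constrained toy problem (min x² s.t. x = 1) with a closed-form subproblem solve; this shows only the generic safeguarded mechanism, not Algorithm 6.1 itself:

```python
import numpy as np

# Toy problem: min f(x) = x^2  s.t.  h(x) = x - 1 = 0.
# Augmented Lagrangian: L(x; mu, rho) = x^2 + mu*(x-1) + (rho/2)*(x-1)^2.
# Setting dL/dx = 0 gives the closed-form minimizer x = (rho - mu)/(2 + rho).

def safeguarded_alm(mu0=0.0, rho=1.0, iters=50, mu_bounds=(-1e4, 1e4)):
    mu = mu0
    x = 0.0
    for _ in range(iters):
        x = (rho - mu) / (2.0 + rho)        # exact subproblem solve
        mu = np.clip(mu + rho * (x - 1.0),  # classical multiplier update ...
                     *mu_bounds)            # ... safeguarded by clipping
        rho *= 2.0                          # simple penalty increase
    return x, mu

x, mu = safeguarded_alm()
print(x, mu)  # close to the solution x* = 1 with multiplier mu* = -2
```

The clipping step is what distinguishes the safeguarded variant: the multiplier estimate fed into the next subproblem always stays in a fixed bounded set, which is exactly the boundedness of the safeguarded sequences used in the proofs above.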

Final remarks
In this paper, we introduced CC-AM-stationarity and verified that this sequential optimality condition is satisfied at local minima of cardinality-constrained optimization problems without additional assumptions. Since CC-AM-stationarity is a weaker optimality condition than CC-M-stationarity, we also introduced CC-AM-regularity, a cone-continuity-type condition, and showed that CC-AM-stationarity implies CC-M-stationarity under this constraint qualification.

Appendix

Next we derive an equivalent formulation of CC-AM-stationarity.
Proposition A.2 Let x̄ ∈ ℝ^n be a feasible point of (1.1). Then x̄ is CC-AM-stationary if and only if there exist sequences {x^k} ⊆ ℝ^n, {λ^k} ⊆ ℝ^m_+, {μ^k} ⊆ ℝ^p, and {γ^k} ⊆ ℝ^n such that
(a) {x^k} → x̄,
(b) {∇f(x^k) + ∇g(x^k)λ^k + ∇h(x^k)μ^k + γ^k} → 0,
(c) min{−g_i(x^k), λ^k_i} → 0 for all i = 1, …, m,
(d) min{|x^k_i|, |γ^k_i|} → 0 for all i = 1, …, n.

Proof "⇒": Assume first that x̄ is CC-AM-stationary. We only need to prove that the corresponding sequences also satisfy conditions (c) and (d). For all i ∉ I_g(x̄) we have g_i(x^k) < 0 and λ^k_i = 0 for all k large. For all i ∈ I_g(x̄) we have {g_i(x^k)} → 0 and λ^k_i ≥ 0 for all k ∈ ℕ. In both cases min{−g_i(x^k), λ^k_i} → 0 follows and hence assertion (c) holds. To verify part (d), note that for all i ∈ I_±(x̄) we have γ^k_i = 0 for all k ∈ ℕ and for all i ∈ I_0(x̄) we know x^k_i → 0. In both cases we obtain min{|x^k_i|, |γ^k_i|} → 0.

"⇐": Suppose now that there exist sequences {x^k} ⊆ ℝ^n, {λ^k} ⊆ ℝ^m_+, {μ^k} ⊆ ℝ^p, and {γ^k} ⊆ ℝ^n such that conditions (a)-(d) hold. If we define A^k := ∇f(x^k) + ∇g(x^k)λ^k + ∇h(x^k)μ^k + γ^k for each k ∈ ℕ, then {A^k} → 0. For all i ∉ I_g(x̄) we know {−g_i(x^k)} → −g_i(x̄) > 0 and min{−g_i(x^k), λ^k_i} → 0, which implies {λ^k_i} → 0. For each k ∈ ℕ, define λ̂^k ∈ ℝ^m by λ̂^k_i := λ^k_i for i ∈ I_g(x̄) and λ̂^k_i := 0 otherwise. Then {λ^k} ⊆ ℝ^m_+ implies {λ̂^k} ⊆ ℝ^m_+ and by definition we have λ̂^k_i = 0 for all i ∉ I_g(x̄) and all k ∈ ℕ. Next, for each k ∈ ℕ, define γ̂^k ∈ ℝ^n by γ̂^k_i := γ^k_i for i ∈ I_0(x̄) and γ̂^k_i := 0 for i ∈ I_±(x̄). Then clearly we have γ̂^k_i = 0 for all i ∈ I_±(x̄) and all k ∈ ℕ. Now define for each k ∈ ℕ

Â^k := ∇f(x^k) + ∇g(x^k)λ̂^k + ∇h(x^k)μ^k + γ̂^k.

Since {λ^k_i} → 0 for i ∉ I_g(x̄) and, by (d), {γ^k_i} → 0 for i ∈ I_±(x̄) (because {|x^k_i|} → |x̄_i| > 0 there), we obtain {Â^k − A^k} → 0 and hence {Â^k} → 0. Consequently, x̄ is CC-AM-stationary. ◻
The converse is also true as the following result shows.
Theorem A.4 Let x̄ ∈ ℝ^n be a feasible point of (1.1). If x̄ is a CC-AM-stationary point, then for every ŷ ∈ ℝ^n such that (x̄, ŷ) is feasible for (A.1), the pair (x̄, ŷ) is AW-stationary.

Proof
Assume that x̄ is CC-AM-stationary. Then there exist sequences {x^k} ⊆ ℝ^n, {λ^k} ⊆ ℝ^m_+, {μ^k} ⊆ ℝ^p, and {γ^k} ⊆ ℝ^n such that conditions (a)-(d) in Proposition A.2 hold. Now consider an arbitrary ŷ ∈ ℝ^n such that (x̄, ŷ) is feasible for (A.1), and define, for each k ∈ ℕ, y^k := ŷ together with zero multiplier sequences for all constraints involving y. Hence conditions (a)-(d) and (f) in Definition A.1 are trivially satisfied. Using the feasibility of y^k = ŷ, it is easy to see that the remaining conditions also hold. Consequently, (x̄, ŷ) is AW-stationary. ◻

An obvious advantage of CC-AM-stationarity over AW-stationarity is that it does not depend on the artificial variable y. Hence, CC-AM-stationarity is a genuine optimality condition for the original problem (1.1). Indeed, one can even derive CC-AM-stationarity directly from (1.1) without referring to the relaxed reformulation by using the Fréchet normal cone of the set {x ∈ ℝ^n | ‖x‖_0 ≤ s}.
Funding Open Access funding enabled and organized by Projekt DEAL.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.