Newton-type methods near critical solutions of piecewise smooth nonlinear equations

It is well-recognized that in the presence of singular (and in particular nonisolated) solutions of unconstrained or constrained smooth nonlinear equations, the existence of critical solutions has a crucial impact on the behavior of various Newton-type methods. On the one hand, it has been demonstrated that such solutions turn out to be attractors for sequences generated by these methods, for wide domains of starting points, and with a linear convergence rate estimate. On the other hand, the pattern of convergence to such solutions is quite special, and allows for a sharp characterization which serves, in particular, as a basis for some known acceleration techniques, and for the proof of an asymptotic acceptance of the unit stepsize. The latter is an essential property for the success of these techniques when combined with a linesearch strategy for globalization of convergence. This paper aims at extensions of these results to piecewise smooth equations, with applications to corresponding reformulations of nonlinear complementarity problems.


Introduction
In the recent publications [10,11,17], it has been demonstrated that for smooth nonlinear equations with singular (and possibly nonisolated) solutions, the local behavior of various Newton-type methods is strongly affected by the existence of critical solutions, as defined in [18].
Specifically, [17] extends the results in [14] from the basic Newton method to perturbed versions, establishing their local convergence to a solution satisfying a certain 2-regularity property, from wide domains of starting points. This framework covers a large range of Newton-type methods, including those supplied with stabilization mechanisms, and developed especially for tackling the case of nonisolated solutions, like the Levenberg-Marquardt method [22,23] (see also [24,Chapter 10.2], and [2,26,27] for advanced local convergence theories), the LP-Newton method [5], and stabilized sequential quadratic programming for optimization [7,16,19,25] (see also [20,Chapter 7]). Under the mentioned 2-regularity property, the convergence rate of the methods in the framework is linear with a common ratio of 1/2, and cannot be any faster, whereas the 2-regularity requirement can only hold at those singular solutions that are critical.
The results of [17] were further extended in [10] to the case of constrained equations, with applications to complementarity problems through their piecewise decompositions.
The special convergence pattern of the basic Newton method established in [14] serves as a basis of some techniques for acceleration of its convergence, such as extrapolation and overrelaxation [13,15]. However, when combined with a linesearch strategy for globalization of convergence, these techniques are only useful when the basic method asymptotically accepts the unit stepsize. The latter property, which is not at all automatic near singular solutions, has been established in [10].
In this paper, we aim at obtaining the results of [10,11,17] for a much more general setting of (unconstrained and constrained) piecewise smooth equations, which further makes it possible to treat complementarity problems directly rather than through their piecewise decompositions. We emphasize that we do not propose any new algorithms here, and therefore do not provide numerical comparisons; we rather analyze theoretical properties of known (classes of) algorithms under nonstandard circumstances. The results in this work strongly rely on those in [10,11,17], and are obtained by combining techniques from these references with specificities of piecewise smooth structures. However, these combinations involve some rather subtle ingredients, like choosing a direction with an appropriate collection of active smooth selections associated to it in Theorems 1 and 2, the role of condition (32) for globalization issues, separation of constraints into two parts in Theorem 3, etc. All these ingredients are crucial to cover a much wider territory than in [10,11,17], which is demonstrated by rather nontrivial direct (i.e., not requiring any decompositions) applications to complementarity problems in Sect. 4.
The rest of the paper is structured as follows. In Sect. 2, we consider unconstrained piecewise smooth equations. Section 2.1 contains the problem setting and the related objects and terminology, including the basic piecewise Newton method. We also discuss the requirement of 2-regularity, needed for the subsequent development, and its relations to the concept of a critical solution. In Sect. 2.2, we provide an upper estimate of the set of active smooth selections at nearby points, playing the key role for the entire paper, and discuss the related assumptions. Section 2.3 provides the result on local convergence for a large class of algorithms that can be modeled as a perturbed piecewise Newton method. Section 2.4 deals with asymptotic acceptance of the unit stepsize. Section 3 is concerned with extensions of the obtained results to the case of constrained piecewise smooth equations. Finally, in Sect. 4, we consider the applicability of the developed theory to nonlinear complementarity problems reformulated using the min complementarity function.

A few words about our notation, which is fairly standard. By ⟨⋅, ⋅⟩ and ‖ ⋅ ‖ we denote the Euclidean inner product and the corresponding norm, respectively, unless specified otherwise. For a given index set J, we write u_J for the subvector of a vector u, with components u_j, j ∈ J. Similarly, M_J stands for the submatrix of a matrix M, with rows M_j, j ∈ J. By I, we denote an identity matrix of a size always clear from the context. Furthermore, ker M and im M stand for the null space and the range space of a matrix (linear operator) M, respectively. Finally, R_U(u) stands for the radial cone to a set U at u ∈ U, i.e., the set of directions v ∈ ℝ^p such that u + tv ∈ U for all t > 0 small enough.

Problem setting and preliminaries
We consider the equation

Φ(u) = 0, (1)

where Φ : ℝ^p → ℝ^p is a piecewise smooth mapping. By the latter we mean that Φ is continuous, and there exist smooth selection mappings Φ^1, …, Φ^q : ℝ^p → ℝ^p such that, for every u ∈ ℝ^p, Φ(u) = Φ^j(u) holds for at least one j ∈ {1, …, q}. A mapping is called smooth if it is continuously differentiable.
For a given u ∈ ℝ^p, let

A(u) := {j ∈ {1, …, q} : Φ^j(u) = Φ(u)} (2)

stand for the set of indices of all selection mappings active at u. By the continuity requirements, the set-valued mapping A(⋅) is evidently outer semicontinuous, i.e., A(u) ⊂ A(ū) holds for any ū ∈ ℝ^p and all u ∈ ℝ^p close enough to ū. From [6, Lemma 4.6.1] we have that Φ is directionally differentiable at ū in any direction v ∈ ℝ^p, with the directional derivative Φ′(ū; v), and

Φ(ū + v) = Φ(ū) + Φ′(ū; v) + o(‖v‖) (3)

as v → 0. Moreover, Φ′(ū; ⋅) is everywhere continuous, and

Φ′(ū; v) ∈ {(Φ^j)′(ū)v : j ∈ A(ū)} for all v ∈ ℝ^p. (4)

Let G : ℝ^p → ℝ^{p×p} be any mapping possessing the following property:

G(u) ∈ {(Φ^j)′(u) : j ∈ A(u)} for all u ∈ ℝ^p. (5)

Near a current iterate u^k ∈ ℝ^p, piecewise Newton-type methods rely on the following approximation of Eq. (1):

Φ(u^k) + G(u^k)(u − u^k) = 0. (6)

Observe that there exists j_k ∈ A(u^k) such that (6) can be written as

Φ^{j_k}(u^k) + (Φ^{j_k})′(u^k)(u − u^k) = 0, (7)

which is the classical Newton method iteration system for the smooth equation

Φ^{j_k}(u) = 0. (8)

According to the outer semicontinuity property of A(⋅), for any u^k close enough to some fixed solution ū of (1), it holds that j_k ∈ A(u^k) implies j_k ∈ A(ū). If A(ū) is a singleton {ĵ}, then the piecewise Newton method reduces to the classical one for the smooth Eq. (8) with j_k = ĵ. In this case, various local results obtained for smooth (constrained) equations can be easily extended to the piecewise smooth setting.
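To make the notions above concrete, here is a minimal Python sketch (an illustration, not part of the paper's development) computing the active index set A(u) for the scalar piecewise smooth mapping Φ(u) = min{u, u²}, with smooth selections Φ¹(u) = u and Φ²(u) = u²; both selections are active exactly at u = 0 and u = 1.

```python
import numpy as np

# Smooth selections of the piecewise smooth map Phi(u) = min(u, u**2).
selections = [lambda u: u, lambda u: u**2]

def phi(u):
    return min(s(u) for s in selections)

def active_set(u, tol=1e-12):
    """A(u): indices of selection mappings active at u, cf. (2)."""
    v = phi(u)
    return [j for j, s in enumerate(selections) if abs(s(u) - v) <= tol]

# Both selections are active at the nonisolated solution u = 0 and at u = 1:
assert active_set(0.0) == [0, 1]
assert active_set(1.0) == [0, 1]
# Elsewhere only one selection is active:
assert active_set(0.5) == [1]   # u**2 < u on (0, 1)
assert active_set(2.0) == [0]   # u < u**2 for u > 1
```

The outer semicontinuity of A(⋅) is visible here: near u = 0, the active set at nearby points is always a subset of A(0) = {0, 1}.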
The case when A(ū) may be not a singleton is much more interesting and involved, as j_k can vary with k, no matter how close the iterates are to ū. This case will be addressed below, with the main emphasis on the situation when the solution ū in question is singular in a well-defined sense implying that (Φ^j)′(ū) is singular for some j ∈ A(ū). Observe that the latter is automatic when ū is a nonisolated solution of (1). As explained above, our focus will be on the results on local attraction to some special solutions of this kind for various piecewise Newton-type methods, as well as on acceptance of the unit stepsize.
In order to explain which singular solutions of Eq. (1) we are interested in, let us recall the concept of 2-regularity, in one of its equivalent forms, convenient for our purposes. Assuming that a selection mapping Φ^j is twice differentiable at ū, we say that Φ^j is 2-regular at ū in a direction v ∈ ℝ^p if

im (Φ^j)′(ū) + (Φ^j)″(ū)[v] ker (Φ^j)′(ū) = ℝ^p.

Observe that this property is stable subject to small perturbations of v, a fact that will be used in the discussion preceding Theorem 1. The key assumption in that theorem and other main results presented below is the existence of v ∈ ker (Φ^j)′(ū), v ≠ 0, such that Φ^j is 2-regular at ū in the direction v. According to [17, Proposition 1], if this assumption holds for j ∈ A(ū), then ū is necessarily a critical solution of the equation

Φ^j(u) = 0 (9)

in the sense of [18, Definition 1]. That is why we are talking about the behavior of Newton-type methods near critical solutions.

Key construction
For any ū ∈ ℝ^p and v ∈ ℝ^p, define the index set

A(ū, v) := {j ∈ A(ū) : v ∈ ker (Φ^j)′(ū)}. (10)

Further assuming that ‖v‖ = 1, for any ε > 0 and δ > 0, define the set

K_{ε,δ}(ū, v) := {u ∈ ℝ^p : 0 < ‖u − ū‖ ≤ ε, ‖(u − ū)∕‖u − ū‖ − v‖ ≤ δ}. (11)

Proposition 1 For a given solution ū of (1), let v ∈ ℝ^p be such that ‖v‖ = 1 and

Φ′(ū; v) = 0. (12)

Then there exist ε > 0 and δ > 0 such that

A(u) ⊂ A(ū, v) for all u ∈ K_{ε,δ}(ū, v). (13)

Proof We argue by contradiction: suppose that there exist sequences {ε_k} → 0+, {δ_k} → 0+, and {u^k} ⊂ ℝ^p such that, for all k, it holds that u^k ∈ K_{ε_k, δ_k}(ū, v), and there exists j_k ∈ A(u^k) ⧵ A(ū, v). Passing to subsequences, if necessary, we may suppose that j_k = j is the same for all k. Outer semicontinuity of A(⋅) at ū yields that j ∈ A(ū). Thus, since j ∉ A(ū, v), (10) implies that v ∉ ker (Φ^j)′(ū). Hence, there exists γ > 0 such that

‖Φ(u^k)‖ = ‖Φ^j(u^k)‖ ≥ γ ‖u^k − ū‖ (14)

for all k large enough. On the other hand, from (11) we have that (u^k − ū)∕‖u^k − ū‖ → v as k → ∞, and therefore, according to (3), (12), and continuity of Φ′(ū; ⋅),

Φ(u^k) = o(‖u^k − ū‖),

contradicting (14). ◻

The following corollary of Proposition 1 is evident.

Corollary 1 Under the assumptions of Proposition 1, there exist ε > 0 and δ > 0 such that

The next example shows that, in general, assumption (12) cannot be dropped in Corollary 1, and hence, in Proposition 1.
The converse implication is valid assuming that A(ū, v) consists of a single index ĵ. In this case, the inequality in (16) with j = ĵ holds as equality for ε > 0 and δ > 0 small enough. This is an immediate consequence of Corollary 1. The next example demonstrates that when A(ū, v) is not a singleton, condition (12) may hold when (16) does not.

Attraction to critical solutions
We first present a result on local attraction of the perturbed piecewise Newton method to a solution which is critical with respect to some smooth selection mappings active at this solution. The essence of this result is as follows: Proposition 1 makes it possible to show that the perturbed piecewise Newton method fits the algorithmic framework of [17, Theorem 1] when this framework is applied to the smooth equations corresponding to an appropriate collection of active smooth selections.
For given ū ∈ ℝ^p and v ∈ ℝ^p, assuming that N = ker (Φ^j)′(ū) is the same for all j ∈ A(ū, v), we will use the uniquely defined decomposition of every u ∈ ℝ^p into the sum u = u_1 + u_2, with u_1 ∈ N^⊥ and u_2 ∈ N. Let Π be the orthogonal projector onto N in ℝ^p.
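As a numerical illustration (not from the paper), the decomposition u = u_1 + u_2 can be computed from an SVD: with a matrix J standing for (Φ^j)′(ū), an orthonormal basis of N = ker J yields the orthogonal projector Π onto N. The tolerance-based rank decision below is an assumption of this sketch.

```python
import numpy as np

def null_space_projector(J, tol=1e-12):
    """Orthogonal projector Pi onto N = ker J, built from the SVD of J."""
    _, s, Vt = np.linalg.svd(J)
    rank = int(np.sum(s > tol * max(s[0], 1.0)))
    Vn = Vt[rank:].T          # columns form an orthonormal basis of ker J
    return Vn @ Vn.T

J = np.array([[1.0, 0.0],
              [0.0, 0.0]])    # singular Jacobian, ker J = span{e_2}
Pi = null_space_projector(J)
u = np.array([3.0, 4.0])
u2 = Pi @ u                   # component in N
u1 = u - u2                   # component in N^perp
assert np.allclose(u2, [0.0, 4.0]) and np.allclose(u1, [3.0, 0.0])
```

The projector is idempotent and symmetric, so u_1 and u_2 are uniquely determined, as used in the statements below.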
The assumption that the null spaces of (Φ^j)′(ū) coincide for all j ∈ A(ū, v) may seem restrictive, while in fact it is not, in the following sense. Suppose that for a given v, there exist j_1, j_2 ∈ A(ū, v) such that ker (Φ^{j_1})′(ū) ≠ ker (Φ^{j_2})′(ū). Therefore, there must exist ṽ ∈ ℝ^p such that it belongs, say, to the first of these null spaces but not to the second. Then, for any real t ≠ 0 close enough to 0, it holds that (Φ^{j_1})′(ū)(v + tṽ) = 0 ≠ (Φ^{j_2})′(ū)(v + tṽ). By (4), we then have that, in particular, A(ū, v + tṽ) cannot contain both indices j_1 and j_2 simultaneously. Continuing this procedure with v replaced by v + tṽ, we end up with v such that either A(ū, v) is a singleton, or the null spaces of (Φ^j)′(ū) coincide for all j ∈ A(ū, v). Moreover, this v can be taken arbitrarily close to the original one, and therefore, the 2-regularity properties in the original direction will be preserved for this v.
Theorem 1 For a given solution ū of (1), let v ∈ ℝ^p with ‖v‖ = 1 be satisfying the following requirements:

1. For every j ∈ A(ū, v), the selection mapping Φ^j is twice differentiable near ū, with its second derivative being Lipschitz-continuous with respect to ū, that is,

‖(Φ^j)″(u) − (Φ^j)″(ū)‖ = O(‖u − ū‖) (17)

as u → ū; moreover, ker (Φ^j)′(ū) = N, where the linear subspace N does not depend on j, and Φ^j is 2-regular at ū in the direction v.

2. Condition (12) holds.

Then, for any G : ℝ^p → ℝ^{p×p} satisfying (5), any Ω : ℝ^p → ℝ^{p×p} and ω : ℝ^p → ℝ^p satisfying the smallness conditions (18)–(20) as u → ū, and any ε̄ > 0 and δ̄ > 0, there exist ε > 0 and δ > 0 such that for every starting point u^0 ∈ K_{ε,δ}(ū, v), there exists the unique sequence {u^k} ⊂ ℝ^p such that for each k, the iterate u^{k+1} solves

Φ(u^k) + ω(u^k) + (G(u^k) + Ω(u^k))(u − u^k) = 0, (21)

and for this sequence, and for each k, it holds that u^k_2 ≠ ū_2, u^k ∈ K_{ε̄,δ̄}(ū, v), {u^k} converges to ū, {‖u^k − ū‖} converges to zero monotonically, and estimates (22) and (23) hold.

Proof According to Proposition 1, if ε̄ > 0 and δ̄ > 0 are small enough, then for all u^k ∈ K_{ε̄,δ̄}(ū, v), Eq. (21) coincides with

Φ^j(u^k) + ω(u^k) + ((Φ^j)′(u^k) + Ω(u^k))(u − u^k) = 0 (24)

for some j ∈ A(ū, v), defining a perturbed Newton method step for the smooth Eq. (9). The proof in [17, Theorem 1] does not depend on the index j of this equation. Rather, it relies on the existence of the perturbed Newton method step v^k for this equation at u^k ∈ K_{ε̄,δ̄}(ū, v) with sufficiently small ε̄ > 0 and δ̄ > 0, and on the description of v^k in the form of the estimates on v^k_1 and v^k_2, i.e., the properties established in [17, Lemma 1]. Note that u = u^k + v^k solves Eq. (24). It remains to observe that, under the stated assumptions, this lemma is applicable to Eq. (9) and the corresponding iteration Eq. (24) for every j ∈ A(ū, v), and since this index set is finite, the needed estimates on v^k_1 and v^k_2 can be considered the same for all j ∈ A(ū, v). Then the assertion in [17, Theorem 1] gives the desired conclusions. ◻

Remark 2
According to [17, Lemma 1], the iterates in Theorem 1 also satisfy, as u^k → ū, an estimate that will be needed below.
Therefore, all assumptions of Theorem 1 are satisfied with this v, except for (12).
Example 2 (continued) This example demonstrates the situation when Theorem 1 is applicable with A(ū,v) not being a singleton. Indeed, one can easily see that, say, for v = 1 , requirement 1 of Theorem 1 is satisfied (with N = ℝ ), while (12) in requirement 2 has already been demonstrated above.
Numerical experiments show that switching between different smooth selections is typical for iterates generated by solving (6).
As explained in [17], the perturbation terms Ω and ω serve to cover various specific Newton-type methods within the general framework (21). We next mention several such methods available for the piecewise smooth setting. To begin with, as already mentioned above, taking Ω(⋅) ≡ 0 and ω(⋅) ≡ 0 yields the basic piecewise Newton method (6).
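The basic piecewise Newton method can be sketched on the scalar mapping Φ(u) = min{u, u²} (this toy computation is an illustration, not one of the paper's experiments): for u ∈ (0, 1) the active selection is Φ²(u) = u², and the Newton step gives u^{k+1} = u^k − (u^k)²∕(2u^k) = u^k∕2, exhibiting exactly the linear convergence with ratio 1∕2 to the singular solution ū = 0 discussed in the introduction.

```python
# One basic piecewise Newton step (6) for Phi(u) = min(u, u**2).
def newton_step(u):
    if u <= u**2:                 # selection Phi^1(u) = u is active
        return u - u              # derivative 1; the step solves Phi^1(u) = 0 exactly
    return u - u**2 / (2.0 * u)   # selection Phi^2(u) = u**2 is active: u/2

u = 0.5
for _ in range(5):
    u = newton_step(u)
assert abs(u - 0.5 / 2**5) < 1e-15   # five halvings: linear rate 1/2

# From u < 0 the selection Phi^1 is active and the method terminates in one step:
assert newton_step(-0.3) == 0.0
```

This matches the dichotomy described above: near the critical solution the rate is linear with ratio 1∕2 along the branch where the singular selection is active, while along the nonsingular selection the equation is solved in one step.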
Subproblem (6) can be replaced by the following unconstrained convex optimization problem:

minimize ‖Φ(u^k) + G(u^k)(u − u^k)‖² over u ∈ ℝ^p, (26)

which is always solvable (unlike (6)), though the solution set can be a non-singleton affine manifold. The family of algorithms employing this subproblem (e.g., with specific rules for choosing a solution of (26) when it is not unique, like picking the solution minimizing ‖u − u^k‖) can be referred to as the piecewise Gauss-Newton method.
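A piecewise Gauss-Newton step of this kind, with the minimum-norm selection rule mentioned above, can be sketched as follows (an illustration under the stated rule, not a definitive implementation); `numpy.linalg.lstsq` returns precisely the minimum-norm least-squares solution when the solution set is an affine manifold.

```python
import numpy as np

def gauss_newton_step(phi_uk, G_uk):
    """Step v minimizing ||Phi(u^k) + G(u^k) v||^2 with minimal ||v||, cf. (26)."""
    v, *_ = np.linalg.lstsq(G_uk, -phi_uk, rcond=None)
    return v  # u^{k+1} = u^k + v

# Singular G: subproblem (6) may be unsolvable, but (26) still has a
# (minimum-norm) solution.
G = np.array([[1.0, 0.0],
              [0.0, 0.0]])
r = np.array([2.0, 0.0])
v = gauss_newton_step(r, G)
assert np.allclose(v, [-2.0, 0.0])
```

In the rank-deficient example above, the second component of the step is free in the least-squares sense; the minimum-norm rule fixes it to zero.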
A stabilized version of both the piecewise Newton method and the piecewise Gauss-Newton method is the piecewise Levenberg-Marquardt method [4,8]. It generates the next iterate as the solution of the subproblem

minimize ‖Φ(u^k) + G(u^k)(u − u^k)‖² + σ(u^k)‖u − u^k‖² over u ∈ ℝ^p, (27)

where σ : ℝ^p → ℝ_+ defines the regularization parameter. If σ(u^k) > 0, the objective function of this subproblem is strongly convex quadratic, and in particular, the subproblem has a unique solution. A typical choice is σ(u) := ‖Φ(u)‖^τ with some τ > 0. For our purposes here, we require that τ ≥ 2. Since ‖ ⋅ ‖ denotes the Euclidean norm, solving (27) is the same as solving the linear equation

((G(u^k))^⊤ G(u^k) + σ(u^k) I)(u − u^k) = −(G(u^k))^⊤ Φ(u^k). (28)

A different stabilizing construction is the piecewise LP-Newton method [5]. In our setting, its subproblem has the form

minimize γ subject to ‖Φ(u^k) + G(u^k)(u − u^k)‖ ≤ γ‖Φ(u^k)‖², ‖u − u^k‖ ≤ γ‖Φ(u^k)‖, (29)

with variables (u, γ) ∈ ℝ^p × ℝ. If ‖ ⋅ ‖ denotes the l_∞-norm, then this is a linear program (hence the name of the method). It can be easily seen that (29) is always solvable, unless u^k is a solution of (1), but a solution of (29) need not be unique, and it is assumed that an arbitrary solution can be picked up.
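The Levenberg-Marquardt step, being the solution of a linear equation in the Euclidean-norm case, admits a direct sketch (with the choice σ(u) = ‖Φ(u)‖^τ, τ = 2, assumed here):

```python
import numpy as np

def lm_step(phi_uk, G_uk, tau=2.0):
    """Piecewise Levenberg-Marquardt step: solve the regularized normal
    equations (G^T G + sigma I) v = -G^T Phi with sigma = ||Phi||**tau."""
    sigma = np.linalg.norm(phi_uk) ** tau
    p = G_uk.shape[1]
    H = G_uk.T @ G_uk + sigma * np.eye(p)
    return np.linalg.solve(H, -G_uk.T @ phi_uk)

# The step is well defined even for a completely singular G, as long as
# Phi(u^k) != 0 (so that sigma > 0):
v = lm_step(np.array([1.0, 0.0]), np.zeros((2, 2)))
assert np.allclose(v, 0.0)

# Nonsingular 1-D case: G = 1, Phi = 1, sigma = 1 gives 2 v = -1.
assert np.allclose(lm_step(np.array([1.0]), np.eye(1)), [-0.5])
```

Note how the regularization damps the step relative to the pure Newton step, which is the stabilization mechanism relevant near nonisolated solutions.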
By considerations similar to those in [17], it follows that under the assumptions of Theorem 1, all these methods can be interpreted through the perturbed piecewise Newton method framework (21). Specifically, if u^k ∈ K_{ε,δ}(ū, v) with sufficiently small ε > 0 and δ > 0, then u^{k+1} produced by these methods satisfies (21) with appropriate choices of Ω and ω (sometimes not defined explicitly). This implies that the conclusions of Theorem 1 are valid for all these methods.

Acceptance of the unit stepsize
Our next result is concerned with acceptance of the unit stepsize by the piecewise Newton method supplied with a natural linesearch procedure, near a solution which is critical with respect to some active smooth selection mappings. Again, the key tool is Proposition 1, which makes it possible to show that this method fits [11, Proposition 3] applied to the proper active smooth selections. To that end, we state the following model algorithm.
Proof The reasoning is essentially the same as for Theorem 1, but with [17, Lemma 1] replaced by [11,Lemma 1], and with [17, Theorem 1] replaced by [11,Proposition 3]. One should also note that since A(ū,v) is finite, all the constants arising along the way of proving [11,Proposition 3] can be chosen the same for all j ∈ A(ū,v) . ◻ Apart from playing a key role in establishing that Algorithm 1 can be expected to inherit the linear convergence rate of the piecewise Newton method, Theorem 2 is essential in this context for justification of known techniques for acceleration of convergence, such as extrapolation and overrelaxation; see [13,15] and [11,12].
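A linesearch of the kind used in Algorithm 1 can be sketched as follows; the specific acceptance test ‖Φ(u + αv)‖ ≤ (1 − σα)‖Φ(u)‖ with backtracking by halving is an assumption of this sketch, standing in for the exact test (30), and the point is that the unit stepsize α = 1 is tried first.

```python
import numpy as np

def linesearch(phi, u, v, sigma=0.25, alpha_min=1e-12):
    """Backtracking linesearch on the residual norm; alpha = 1 is tried first."""
    alpha = 1.0
    r0 = np.linalg.norm(phi(u))
    while np.linalg.norm(phi(u + alpha * v)) > (1.0 - sigma * alpha) * r0:
        alpha *= 0.5
        if alpha < alpha_min:
            break
    return alpha

# For Phi(u) = u and the Newton direction v = -u, the unit step solves the
# equation exactly, so it is accepted immediately:
u = np.array([0.3, -0.2])
assert linesearch(lambda w: w, u, -u) == 1.0
# An overshooting direction forces backtracking:
assert linesearch(lambda w: w, u, -2.0 * u) == 0.5
```

Theorem 2 asserts, roughly, that near a critical solution the first test in this loop passes asymptotically, so the accelerated variants built on full steps remain applicable.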

Problem setting
We now consider the constrained version of Eq. (1), namely, where U ⊂ ℝ p is a closed convex set. Such constraints can be exogenous by nature (e.g., when solutions of the unconstrained equation make physical sense only if they satisfy these constraints, like nonnegativity restrictions on the components of u representing quantities), or be intrinsic ingredients of the problem setting (e.g., in some reformulations of complementarity conditions). On the other hand, artificially imposing relevant constraints can be essential for justification of strong local convergence properties of Newton-type methods [5,8], as well as for globalization of their convergence.
Even though Theorem 2 is valid and characterizes an important local feature of Algorithm 1, the linesearch procedure in this algorithm does not make much sense, in general, as the direction v^k of the piecewise Newton method need not be a direction of descent for ‖Φ(⋅)‖ at u^k. However, suppose that there exists U ⊂ ℝ^p with the following property:

‖Φ(u)‖ ≤ ‖Φ^j(u)‖ for all u ∈ U and all j ∈ {1, …, q} (32)

(cf. (16)). This important kind of piecewise smoothness has already been considered in [9, (4.8)]. Since there exists j_k ∈ A(u^k) such that u^k + v^k solves (7), and assuming that ‖Φ^{j_k}(u^k)‖ = ‖Φ(u^k)‖ ≠ 0, we obtain in a standard way that v^k is a direction of descent for ‖Φ^{j_k}(⋅)‖ at u^k, where this function is smooth with its gradient at u^k equal to ((Φ^{j_k})′(u^k))^⊤ Φ^{j_k}(u^k)∕‖Φ^{j_k}(u^k)‖. Assuming now that u^k ∈ U, and v^k is a feasible direction for U at u^k (which is of course not automatic, and has to be ensured by appropriate modifications of Algorithm 1; see below), from (32) we have that

‖Φ(u^k + αv^k)‖ ≤ ‖Φ^{j_k}(u^k + αv^k)‖ < ‖Φ^{j_k}(u^k)‖ = ‖Φ(u^k)‖

for all α > 0 small enough.
Employing Proposition 1, one can readily extend Theorems 1 and 2 to the constrained case, along the lines of [10]. Specifically, under the additional assumption v ∈ int R_U(ū), one can claim that in these theorems ε > 0 and δ > 0 can be taken small enough so that the sequence {u^k} in question entirely belongs to U.
However, as will be demonstrated in Sect. 4, such results cannot be applied directly to, say, constrained reformulations of complementarity problems, because the requirement v ∈ int R_U(ū) appears too restrictive for the choices of U relevant in that context, in the absence of strict complementarity. To that end, in Sect. 3.2 below, we will assume that the interiority assumption on v holds only for a part of the constraints, while the other constraints are observed by the iteration of the piecewise Newton method itself. Afterwards, we will show how this framework, combined with the unconstrained Theorem 1, makes it possible to cover various implementable Newton-type methods intended for solving the constrained problem (31). In Sect. 4, all the developed machinery will be applied to complementarity problems.

Main result for the constrained case
Combining Theorem 1 with [10, Lemma 3.1], we obtain the following constrained counterpart of that theorem, but concerned with the basic piecewise Newton method only. Later in this section, we demonstrate how this result, combined with Theorem 1 for the unconstrained case, makes it possible to cover various Newton-type methods for solving the constrained Eq. (31).
Theorem 3 For a given solution ū of (31), let v ∈ ℝ^p with ‖v‖ = 1 be satisfying the following requirements:

1. For every j ∈ A(ū, v), the selection mapping Φ^j is twice differentiable near ū, (17) holds as u ∈ U tends to ū, ker (Φ^j)′(ū) = N, where the linear subspace N does not depend on j, and Φ^j is 2-regular at ū in a direction v.
2. Condition (12) holds.
3. It holds that v ∈ int P.
4. There exist ε_1 > 0 and δ_1 > 0 such that for every j ∈ A(ū, v), and for any u^k ∈ K_{ε_1, δ_1}(ū, v) ∩ U, any solution of the equation

Φ^j(u^k) + (Φ^j)′(u^k)(u − u^k) = 0

belongs to Q.

Then, for any G : ℝ^p → ℝ^{p×p} satisfying (5), and any ε̄ > 0 and δ̄ > 0, there exist ε > 0 and δ > 0 such that for every starting point u^0 ∈ K_{ε,δ}(ū, v) ∩ U, there exists the unique sequence {u^k} ⊂ ℝ^p such that for each k, the iterate u^{k+1} solves (6), and for this sequence, and for each k, it holds that u^k_2 ≠ ū_2, u^k ∈ K_{ε̄,δ̄}(ū, v) ∩ U, {u^k} converges to ū, {‖u^k − ū‖} converges to zero monotonically, and estimates (22) and (23) hold.
Similarly to Remark 1, one can see that requirement 2 in Theorem 3 is automatically satisfied if (33) holds with some ε > 0 and δ > 0. Observe that (33) holds if (32) is valid.
Under the assumptions of Theorem 3, one can also apply Theorem 2 in order to establish asymptotic acceptance of the unit stepsize; there is no need to state this result separately in the constrained case.
We now consider some implementable Newton-type methods for problem (31). Given a current iterate u^k ∈ U, the next iterate u^{k+1} of the constrained piecewise Gauss-Newton method is defined by solving the subproblem

minimize ‖Φ(u^k) + G(u^k)(u − u^k)‖² subject to u ∈ U. (34)

The objective function of this subproblem is convex quadratic, and if U is polyhedral, (34) always has a solution due to the Frank-Wolfe theorem [3, Theorem 2.8.1], but a solution of this subproblem need not be unique, in general. However, under the assumptions of Theorem 3, we evidently obtain that, when initialized appropriately, the piecewise Gauss-Newton method uniquely defines the iterative sequence coinciding with the sequence of the unconstrained piecewise Newton method, and hence, inherits all the properties of the latter, specified in Theorem 1.
An alternative possibility is to solve the unconstrained piecewise Newton iteration system (6) for u^k_N, or the unconstrained piecewise Gauss-Newton subproblem (26) for u^k_GN, and then define u^{k+1} as the projection of u^k_N or u^k_GN onto U. This yields the projected piecewise Newton method and the projected piecewise Gauss-Newton method, respectively. Recall that neither existence nor uniqueness of solutions of (6) can be guaranteed without further assumptions, while (26) is always solvable, but its solution set can be a non-singleton affine manifold. Yet again, under the assumptions of Theorem 3, we readily get the same conclusions as for the constrained piecewise Gauss-Newton method: all the three methods behave identically to the unconstrained piecewise Newton method, when initialized appropriately.
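The projection step is straightforward when U is simple; the following sketch (an illustration assuming U = ℝ^p_+, the nonnegativity constraints mentioned in the problem setting) shows the componentwise clamp that realizes the Euclidean projection.

```python
import numpy as np

def project_nonneg(u):
    """Euclidean projection onto U = R^p_+: componentwise clamp at zero."""
    return np.maximum(u, 0.0)

def projected_newton_update(u_k, v_k):
    """u^{k+1} = P_U(u^k + v^k), with v^k an unconstrained (Gauss-)Newton step."""
    return project_nonneg(u_k + v_k)

u_next = projected_newton_update(np.array([0.5, 0.1]),
                                 np.array([-0.2, -0.3]))
assert np.allclose(u_next, [0.3, 0.0])
```

For a general closed convex U the same scheme applies with the corresponding projection operator; only its computation changes.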
The stabilized version of the constrained piecewise Gauss-Newton method is the constrained piecewise Levenberg-Marquardt method [21] with the following constrained counterpart of subproblem (27):

minimize ‖Φ(u^k) + G(u^k)(u − u^k)‖² + σ(u^k)‖u − u^k‖² subject to u ∈ U. (35)

Assuming that σ(u^k) > 0 (which holds automatically for typical choices of σ(⋅) discussed above, provided the current iterate u^k ∈ U is not a solution of (31)), this subproblem has the unique solution u^{k+1}. According to Theorem 3, if ε̄ > 0 and δ̄ > 0 are small enough, then the inclusion u^k ∈ K_{ε̄,δ̄}(ū, v) implies the existence of the unique u^k_N solving (6), and this u^k_N belongs to U. In particular, it is feasible in (35), and therefore, the optimal value of (35) does not exceed the value of its objective function at u^k_N.
The constrained piecewise LP-Newton method is a natural extension of (29) to the constrained setting, and in fact, the method was originally introduced in [5] precisely for the constrained setting, with the subproblem

minimize γ subject to ‖Φ(u^k) + G(u^k)(u − u^k)‖ ≤ γ‖Φ(u^k)‖², ‖u − u^k‖ ≤ γ‖Φ(u^k)‖, u ∈ U, (36)

with variables (u, γ) ∈ ℝ^p × ℝ. For u^k_N defined as above, we then have that (u^k_N, ‖u^k_N − u^k‖∕‖Φ(u^k)‖) is feasible in (36), and hence, the optimal value γ(u^k) of this problem satisfies γ(u^k) ≤ ‖u^k_N − u^k‖∕‖Φ(u^k)‖. Similarly to (29), subproblem (36) is always solvable, and for any solution (u^{k+1}, γ(u^k)) it holds that

‖u^{k+1} − u^k‖ ≤ γ(u^k)‖Φ(u^k)‖ ≤ ‖u^k_N − u^k‖.

This leads to the same conclusions for the constrained piecewise LP-Newton method as those derived above for the constrained piecewise Levenberg-Marquardt method.
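With the l_∞-norm, subproblem (36) can be posed to any LP solver; the following SciPy sketch (an illustration assuming U = ℝ^p_+; the variable change v = u − u^k and the constraint arrangement are choices of this sketch) makes the linear-programming structure explicit.

```python
import numpy as np
from scipy.optimize import linprog

def lp_newton_step(phi_uk, G_uk, u_k):
    """LP-Newton subproblem (36) in variables (v, gamma), v = u - u^k,
    l_inf-norm, U = R^p_+:  min gamma  s.t.
      |Phi(u^k) + G(u^k) v|_i <= gamma * ||Phi(u^k)||_inf**2,
      |v_i|                   <= gamma * ||Phi(u^k)||_inf,
      u^k + v >= 0."""
    p = u_k.size
    r = np.linalg.norm(phi_uk, np.inf)
    c = np.zeros(p + 1)
    c[-1] = 1.0                      # minimize gamma
    A = np.block([[ G_uk,      -r**2 * np.ones((p, 1))],
                  [-G_uk,      -r**2 * np.ones((p, 1))],
                  [ np.eye(p), -r    * np.ones((p, 1))],
                  [-np.eye(p), -r    * np.ones((p, 1))]])
    b = np.concatenate([-phi_uk, phi_uk, np.zeros(2 * p)])
    bounds = [(-u_k[i], None) for i in range(p)] + [(0, None)]
    sol = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
    return u_k + sol.x[:p], sol.x[-1]

# Phi(u) = u, u^k = 1: the optimal step halves the distance, gamma = 1/2.
u_next, gamma = lp_newton_step(np.array([1.0]), np.eye(1), np.array([1.0]))
assert np.isclose(gamma, 0.5) and np.isclose(u_next[0], 0.5)
```

The optimal γ balances the residual constraint against the stepsize constraint, which is the stabilization effect described above.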

Applications to complementarity problems
Consider now the nonlinear complementarity problem (NCP)

x ≥ 0, F(x) ≥ 0, ⟨x, F(x)⟩ = 0, (37)

with a smooth mapping F : ℝ^n → ℝ^n.

Unconstrained reformulation
Setting u := x (and p := n), problem (37) is equivalent to Eq. (1) with the mapping Φ : ℝ^n → ℝ^n defined by

Φ(u) := min{u, F(u)}, (38)

where the min-operation is applied componentwise. Let q := 2^n, and fix any one-to-one mapping j ↦ I(j) from {1, …, q} to the set of all different subsets of {1, …, n} (including ∅ and the entire {1, …, n}). Then the mapping Φ defined in (38) is piecewise smooth, and the corresponding smooth selection mappings Φ^j : ℝ^n → ℝ^n have the components

Φ^j_i(u) := u_i for i ∈ I(j), Φ^j_i(u) := F_i(u) for i ∉ I(j). (39)

Therefore, for a given u ∈ ℝ^n, the set of indices of active selection mappings defined according to (2) takes the form

A(u) = {j ∈ {1, …, q} : I_<(u) ⊂ I(j) ⊂ I_<(u) ∪ I_=(u)}, (40)

where

I_<(u) := {i : u_i < F_i(u)}, I_=(u) := {i : u_i = F_i(u)}, I_>(u) := {i : u_i > F_i(u)}

is a natural partitioning of the index set {1, …, n}. Hence, the requirement (5) on the choice of a mapping G : ℝ^n → ℝ^{n×n} can be written for the rows G_i(u) of G(u) as follows:

G_i(u) = e_i for i ∈ I_<(u), G_i(u) = F_i′(u) for i ∈ I_>(u), G_i(u) ∈ {e_i, F_i′(u)} for i ∈ I_=(u), (41)

where e_i = (0, …, 0, 1, 0, …, 0), with 1 at the i-th place, i = 1, …, n.
With these objects defined, in order to solve NCP (37), one can apply the methods discussed in Sect. 2.3 to Eq. (1) with defined in (38).
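A minimal Python sketch of these objects (an illustration; the tie-breaking rule of taking e_i on the ambiguous set I_=(u) is a choice of this sketch, admissible under (41)):

```python
import numpy as np

def phi_min(u, F):
    """The residual (38): Phi(u) = min(u, F(u)) componentwise."""
    return np.minimum(u, F(u))

def G_min(u, F, JF):
    """One admissible G satisfying (41): row e_i where u_i <= F_i(u),
    row F_i'(u) where u_i > F_i(u)."""
    n = u.size
    Fu, JFu = F(u), JF(u)
    G = np.empty((n, n))
    for i in range(n):
        G[i] = np.eye(n)[i] if u[i] <= Fu[i] else JFu[i]
    return G

# Componentwise F(x) = x**2 (cf. Example 1, stated there for n = 1).
F = lambda x: x ** 2
JF = lambda x: np.diag(2 * x)
u = np.array([0.6, 3.0])
assert np.allclose(phi_min(u, F), [0.36, 3.0])          # min(0.6, 0.36), min(3, 9)
assert np.allclose(G_min(u, F, JF), [[1.2, 0.0],
                                     [0.0, 1.0]])       # F' row, then e_2
```

On I_=(u) any fixed choice between e_i and F_i′(u) is consistent with (5), since both correspond to selections active at u.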
We proceed with deriving conditions allowing us to apply Theorem 1 in this context. Observe that if x̄ is a solution of (37), then x̄_{I_<} = 0, F_{I_>}(x̄) = 0, and x̄_{I_=} = F_{I_=}(x̄) = 0, where I_< = I_<(x̄), I_= = I_=(x̄), and I_> = I_>(x̄). For brevity, we will use the notation ⧵J := I_= ⧵ J.
The following is [20, Proposition 3.21]; we state it here for convenience of reference below.

Lemma 1 Let x̄ be a solution of (37) with F : ℝ^n → ℝ^n being differentiable at x̄, and consider any ξ̄ ∈ ℝ^n, ξ̄ ≠ 0.
Then condition (12) is satisfied for Φ defined in (38), ū := x̄, v := ξ̄∕‖ξ̄‖, if and only if ξ̄ is a solution of the linear complementarity system

ξ̄_{I_<} = 0, F′_{I_>}(x̄) ξ̄ = 0, ξ̄_{I_=} ≥ 0, F′_{I_=}(x̄) ξ̄ ≥ 0, ⟨ξ̄_{I_=}, F′_{I_=}(x̄) ξ̄⟩ = 0.

Proposition 2 Let x̄ be a solution of (37) with F : ℝ^n → ℝ^n being twice differentiable near x̄, with its second derivative being Lipschitz-continuous with respect to x̄. Assume that for some collection of index sets J_r ⊂ I_=, r = 1, …, s, and for some ξ̄ ∈ ℝ^n, ξ̄ ≠ 0, the following properties are satisfied:

It holds that (42).

It is easy to see that, using the definitions in (48), the set N in (44) can be written as follows. Indeed, ξ̄_{J_r ∪ I_<} = 0 holds for all r = 1, …, s if and only if ξ̄_{J_∪ ∪ I_<} = 0, and taking into account these equalities, for all r = 1, …, s. It remains to observe that, according to (48),

Proof According to (39) and (40), for any J ⊂ I_= and the associated j ∈ A(x̄) with I(j) = J ∪ I_<, we have that (after the appropriate re-ordering of rows and columns)
For both r = 1 and r = 2, (45) takes the form which may only hold with ξ = 0 if we assume that

Finally, the index sets in (48) take the form J_∪ = {2}, J_∩ = ∅, and condition (47) transforms into ξ̄_1 ≥ 0, and hence, all the assumptions of Proposition 2 are satisfied with ξ̄ = (1, 0) [but not with (−1, 0)].

Figure 2 demonstrates some iterative sequences generated by the piecewise Newton method, for F(x) = (x_1 x_2, x_1² − x_2²), and with (41) used with J = I_=(u). In Fig. 2a–f, sequences are initialized within any of six domains such that only one of the smooth selections active at x̄ = 0 remains active on the interior of a given domain, and thick black lines show the boundaries of these domains. Figure 2a demonstrates the convergence pattern specified in Theorem 1, from the convergence domain associated with v = (1, 0), with iterative sequences converging linearly, with the common contingent direction v at x̄. In Fig. 2c–e, the convergence patterns are quite different, with superlinear convergence rate in Fig. 2c and e, and with one-step termination in Fig. 2d. Finally, in Fig. 2b and f, the first step changes the active selection, and then inherits the convergence pattern of the new one.
The statement of Proposition 2 can be somewhat simplified by assuming that s = 1, i.e., that the corresponding A(ū, v) is a singleton.

Corollary 2 Let x̄ be a solution of (37) with F : ℝ^n → ℝ^n being twice differentiable near x̄, with its second derivative being Lipschitz-continuous with respect to x̄. Assume that for some Ĵ ⊂ I_=, and for some ξ̄ ∈ ℝ^n, ξ̄ ≠ 0, it holds that

and there exists no ξ ∈ ℝ^n, with ξ_{⧵Ĵ ∪ I_>} ≠ 0, satisfying

As demonstrated in the proof above, under (54), condition (55) is sufficient for (58). However, one cannot simply replace (55) in Corollary 2 by the weaker assumption (58), as one will have to additionally assume that (60) holds in order to ensure (47) in Proposition 2. Moreover, under (54) and (60), it can be verified that the converse implication is valid as well, i.e., (55) and (58) are actually equivalent, and therefore, any improvement in the statement of Corollary 2 cannot be achieved this way.
Example 1 (continued) Observe that the mapping Φ in this example agrees with (38) for the NCP (37) with F(x) = x². We have I_> = I_< = ∅, and one must take Ĵ = ∅ (corresponding to ĵ = 2) in order to satisfy (54) with some ξ̄ ≠ 0. Then (58) holds trivially with any ξ̄ ≠ 0, but (60), and hence (55), are violated for ξ̄ < 0. Observe further that all the other assumptions of Proposition 2 are satisfied, but as demonstrated above, its assertion is not valid with such ξ̄.
Proposition 2 combined with Theorems 1 and 2 ensures that all the properties of the algorithms discussed in Sects. 2.3 and 2.4 remain valid when these algorithms are applied with Φ defined by (38), with any G : ℝ^n → ℝ^{n×n} satisfying (41), and for appropriate choices of v.
Example 1 (continued) This example also demonstrates that in the context of Corollary 2, the claim above is not valid with assumption (55) replaced by (58).
- For Ĵ = {2}, we have that (54) holds with any ξ̄ ∈ ℝ² satisfying ξ̄_2 = 0, but the second inequality in (55) cannot hold for any ξ̄. Therefore, Proposition 2 is not applicable with this choice of Ĵ.
- For Ĵ = ∅, we have that (54) holds trivially with any ξ̄ ∈ ℝ², (55) reduces to the requirement ξ̄_2 > 0, implying that (56)-(57) cannot hold with any nonzero ξ ∈ ℝ², provided ξ̄_1 ≠ 0. Therefore, Corollary 2 is applicable with this choice of Ĵ, and with any ξ̄ ∈ ℝ² satisfying ξ̄_1 ≠ 0, ξ̄_2 > 0.
In Figs. 3, 4 and 5, the horizontal and vertical lines form the solution set. These figures show some iterative sequences generated by the piecewise Newton method, the piecewise Levenberg-Marquardt method, and the piecewise LP-Newton method, and the domains from which convergence to x̄ was detected. We also show the curves where the activity of different smooth selections of Φ changes. The observed behavior agrees with the considerations above.

implying that (30) holds with α = 1 for v^k = ũ^{k+1} − u^k provided σ ≤ 3∕4. Therefore, for such σ, Step 3 of Algorithm 1 accepts the unit stepsize, and hence, Step 4 produces u^{k+1}_1 = ũ^{k+1}_1. In particular, one can readily check that the requirements u^{k+1}_1 ≠ 1 and u^{k+1}_2 > (u^{k+1}_1 − 1)² remain valid, as for the previous iterate u^k. This implies that the considerations above apply to all subsequent iterations, and hence, Algorithm 1 initialized at u^0 ∈ ℝ² close enough to x̄, and such that u^0_1 ≠ 1, u^0_2 > (u^0_1 − 1)², generates the sequence {u^k} by full piecewise Newton method steps, and all such sequences converge linearly to x̄. This behavior is illustrated by Fig. 3a.

Constrained reformulation
In order to introduce a reasonable constrained reformulation of NCP (37), we first need to reformulate this problem using a slack variable y ∈ ℝ^n:

x ≥ 0, y ≥ 0, ⟨x, y⟩ = 0, F(x) − y = 0. (61)

Observe that x is a solution of NCP (37) if and only if (x, F(x)) is a solution of (61).
Setting u := (x, y) (and p := 2n), we will consider a constrained reformulation (31) of NCP (37), with Φ : ℝ^n × ℝ^n → ℝ^n × ℝ^n defined by

Φ(u) := (F(x) − y, Ψ(x, y)), (62)

and with U := ℝ^n_+ × ℝ^n_+, where Ψ : ℝ^n × ℝ^n → ℝ^n is defined by

Ψ(x, y) := min{x, y}. (63)

The mapping Φ defined by (62)-(63) is piecewise smooth, and the corresponding smooth selection mappings Φ^j : ℝ^n × ℝ^n → ℝ^n × ℝ^n are given by

Φ^j(u) := (F(x) − y, Ψ^j(x, y)), (64)

where the mappings Ψ^j : ℝ^n × ℝ^n → ℝ^n have the components

Ψ^j_i(x, y) := x_i for i ∈ I(j), Ψ^j_i(x, y) := y_i for i ∉ I(j). (65)

It can be easily verified that for the objects defined above, condition (32) is satisfied. At the same time, Example 1 demonstrates that condition (32) with U = ℝ^n × ℝ^n (i.e., in the unconstrained case) is violated for Φ defined in (38) or (62). This demonstrates one of the roles of constraints in this reformulation, which are redundant for the reformulation itself.
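The slack reformulation can be sketched as follows (an illustration of (62)-(63); the concrete F is an assumption taken from Example 1): on U = ℝ^n_+ × ℝ^n_+ the residual min{x, y} is bounded componentwise by both x and y, which is the mechanism behind condition (32) holding here.

```python
import numpy as np

def phi_slack(xy, F):
    """The reformulation residual: Phi(x, y) = (F(x) - y, min(x, y))."""
    n = xy.size // 2
    x, y = xy[:n], xy[n:]
    return np.concatenate([F(x) - y, np.minimum(x, y)])

F = lambda x: x ** 2            # componentwise F of Example 1
x = np.zeros(2)                 # the unique NCP solution for this F
u = np.concatenate([x, F(x)])   # (x, F(x)) solves the constrained reformulation
assert np.allclose(phi_slack(u, F), 0.0)

# On U = R^n_+ x R^n_+, |min(x_i, y_i)| <= min(|x_i|, |y_i|) componentwise:
x, y = np.array([0.3, 2.0]), np.array([1.0, 0.5])
assert np.all(np.abs(np.minimum(x, y)) <= np.minimum(np.abs(x), np.abs(y)))
```

Outside the nonnegative orthant this componentwise bound fails (e.g., min(−3, 1) = −3), which is consistent with the remark above that (32) is violated in the unconstrained case.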
For a given u ∈ ℝ^n × ℝ^n, the set of indices of active selection mappings defined according to (2) has the same form (40) as above, but with

I_<(u) := {i : x_i < y_i}, I_=(u) := {i : x_i = y_i}, I_>(u) := {i : x_i > y_i}.

Therefore, in order to satisfy (5), the mapping G : ℝ^n × ℝ^n → ℝ^{2n×2n} must be of the form

G(u) = ( F′(x) −I ; Γ_1(u) Γ_2(u) ),

where the rows of Γ_1 : ℝ^n × ℝ^n → ℝ^{n×n} and Γ_2 : ℝ^n × ℝ^n → ℝ^{n×n} satisfy

(Γ_1)_i(u) = e_i, (Γ_2)_i(u) = 0 for i ∈ I_<(u); (Γ_1)_i(u) = 0, (Γ_2)_i(u) = e_i for i ∈ I_>(u); ((Γ_1)_i(u), (Γ_2)_i(u)) ∈ {(e_i, 0), (0, e_i)} for i ∈ I_=(u).

Observe that for a given solution x̄ of NCP (37), and for ū = (x̄, F(x̄)), the index sets I_> = I_>(x̄), I_= = I_=(x̄), and I_< = I_<(x̄), defined in Sect. 4.1, coincide with I_>(ū), I_=(ū), and I_<(ū), respectively.

Proof According to (40) and (64)-(65), for any J ⊂ I_= and the associated j ∈ A(ū) with I(j) = J ∪ I_<, we have that (after the appropriate re-ordering of rows and columns) and in particular for some J ⊂ I_=(u).
In conclusion of this section, we emphasize that the material presented in it can be readily extended from NCPs to more general complementarity systems, which, in particular, would allow to cover Example 2. We do not provide this extension here, in order to avoid extra technicalities.
Funding Open Access funding enabled and organized by Projekt DEAL.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.