Second order semi-smooth Proximal Newton methods in Hilbert spaces

We develop a globalized Proximal Newton method for composite and possibly non-convex minimization problems in Hilbert spaces. In addition, we impose less restrictive assumptions on the composite objective functional concerning differentiability and convexity than in existing theory. As far as differentiability of the smooth part of the objective function is concerned, we introduce the notion of second order semi-smoothness and discuss why it constitutes an adequate framework for our Proximal Newton method. Both global convergence and local acceleration continue to hold in this scenario. Eventually, the convergence properties of our algorithm are demonstrated by solving a toy model problem in function space.


Introduction
The subject of this work is to generalize the idea of Proximal Newton methods for composite objective functions to a Hilbert space setting, aiming for the efficient solution of non-convex, non-smooth variational problems. The optimization problem reads

min_{x ∈ X} F(x) := f(x) + g(x),  (1)

where f : X → ℝ is assumed to be smooth in some adequate sense and g : X → ℝ possibly is not. The domain of both f and g is given by a subset of an arbitrary Hilbert space X.
Originally, Fukushima and Mine introduced the Proximal Gradient method in Euclidean space ℝⁿ for optimization problems of the above form, cf. [8]. More specifically, this early version of the Proximal Gradient method constitutes a special case of a procedure studied by Tseng and Yun, cf. [27]. Further research showed that variously defined line search techniques lead to global convergence of the algorithm, even under appropriate inexactness conditions for the solutions of the subproblem for step computation, cf. for example [3,7,9,15,22,24]. Additionally, local acceleration results have been achieved by utilizing second order information of the smooth part close to optimal solutions of the original minimization problem.
Obviously, further assumptions on the form of the composite objective functional open the door to more specific adaptations of the solution algorithm. For example, in [6,17,25] the authors assume convexity and self-concordance of the smooth part f in order to employ damped Proximal Newton methods. Alternatively, reformulations of the original minimization problem can be useful. As a consequence, methods which have been proven to work for other problem classes can also be applied in our case. For example, in [4,5,18] fixed point algorithms were employed, or consider [1] for a reformulation of (1) as a constrained problem.
A different point of view on this class of problems was taken by Milzarek and Ulbrich in [20]. For g(x) := λ‖x‖₁ with λ > 0, they considered a semi-smooth Newton method with filter globalization, which Milzarek later generalized to arbitrary convex functions g, cf. [19].
Recently, Kanzow and Lechner discussed a globalized, inexact and possibly non-convex Proximal Newton-type method in Euclidean space ℝⁿ, cf. [13]. There, the algorithm resorts to Proximal Gradient steps in the case of insufficient descent, together with a line search procedure, in order to achieve global convergence and to cope with lacking convexity of the objective functional. The work of Lee and Saunders [16] gives an instructive overview of a generic version of the Proximal Newton method as well as several convergence results. Our contributions beyond [16] can be summarized as follows: most obviously, we generalize the Euclidean space setting to a Hilbert space one. Additionally, in [16] only elliptic bilinear forms for the second order model are considered and the non-smooth part g is required to be convex. We use a more general framework of convexity assumptions for the composite objective function F. Furthermore, we do not demand second order differentiability with Lipschitz-continuous second order derivative of the smooth part f, but instead settle for adequate semi-smoothness assumptions. We replace the simple line search approach for globalization by a more sophisticated proximal arc search method, which additionally softens the convexity assumptions on the objective functional. Eventually, we establish a more refined version of the global convergence proof and also give a dual interpretation of the stopping criterion of the algorithm. To our knowledge, the notion of second order semi-smoothness for f has also yet to appear in the literature. On the other hand, our work here covers neither inexact nor Proximal Quasi-Newton methods.
An important practical aspect of splitting methods, such as Proximal Newton, is that the non-smooth part g of the composite objective functional F yields a proximity operator prox_g that can be evaluated easily. This is, for example, the case if g and also the employed scalar product have diagonal structure. Then the solution of the subproblem within the proximity operator can be computed cheaply in a componentwise fashion. In function space problems, in particular if Sobolev spaces are involved, it is known that instead of a diagonal structure a multi-level structure should be used in order to reflect the topology of the function space properly. Diagonal proximal operators would suffer from mesh-dependent condition numbers. In our numerical computations we therefore employ non-smooth multigrid techniques to compute the Proximal Newton steps, in particular Truncated Non-smooth Newton Multigrid methods, cf. [10].
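To make the componentwise evaluation concrete, the following sketch (our own toy illustration, not code from the paper's experiments; the function name and test data are hypothetical) evaluates the proximity operator of g(x) = λ‖x‖₁ with respect to a diagonal scalar product on ℝⁿ:

```python
import numpy as np

def prox_l1_diagonal(v, lam, d):
    """Proximity operator of g(x) = lam * ||x||_1 with respect to the
    diagonal scalar product <x, y> = sum_i d_i * x_i * y_i, i.e.
        argmin_u  lam * ||u||_1 + 0.5 * <u - v, u - v> .
    Since both g and the scalar product separate over components, the
    minimizer is soft-thresholding applied entrywise with threshold lam/d_i.
    """
    return np.sign(v) * np.maximum(np.abs(v) - lam / d, 0.0)

v = np.array([1.5, -0.2, 0.7])
d = np.array([1.0, 1.0, 2.0])
print(prox_l1_diagonal(v, lam=0.5, d=d))
```

Each component reduces to a scalar problem with threshold λ/d_i, which is why diagonal structure makes the prox subproblem cheap; a multi-level scalar product destroys this separability, motivating the multigrid solvers mentioned above.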
Let us first specify the setting in which we will discuss the convergence properties of Proximal Newton methods: a real Hilbert space (X, ⟨·,·⟩_X) with corresponding norm ‖v‖_X = √⟨v, v⟩_X and dual space X*. The Hilbert space structure of X also gives us access to the Riesz isomorphism R : X → X*, defined by Rx = ⟨x, ·⟩_X, which satisfies ‖Rx‖_{X*} = ‖x‖_X for every x ∈ X. Since R is non-trivial in general, we will not identify X and X*.
We will assume the smooth part f : X → ℝ of our objective functional to be continuously differentiable with Lipschitz-continuous derivative f′ : X → X*, i.e., we can find some constant L_f > 0 such that for every x, y ∈ X the estimate

‖f′(x) − f′(y)‖_{X*} ≤ L_f ‖x − y‖_X

holds.
Next we will specify our assumptions on the second order model for f. In what follows, we will notationally identify the linear operators H_x ∈ L(X, X*) with the corresponding symmetric bilinear forms H_x : X × X → ℝ, and write (H_x v)(w) = H_x(v, w), using the abbreviation H_x(v)² = H_x(v, v). We will assume uniform boundedness of H_x along the sequence (x_k) of iterates:

‖H_{x_k}‖_{L(X, X*)} ≤ M.

In addition, along the sequence of iterates x_k we assume a uniform bound of the form

H_{x_k}(v, v) ≥ κ₁ ‖v‖²_X for all v ∈ X.  (3)

For κ₁ > 0 estimate (3) represents ellipticity of H_x with constant κ₁. When considering exact (and smooth) Proximal Newton methods, where H_x is given by the second order derivative of f at some point x ∈ X, (3) is equivalent to κ₁-strong convexity of f. In the case κ₁ > 0 we may also define an energy norm and write ‖v‖_{H_x} := √(H_x(v, v)). For most of the paper we may choose H_x freely in the above framework. For fast local convergence, however, we will impose a semi-smoothness assumption, cf. (15). Semi-smooth Newton methods in function space have been discussed, for example, in [12,23,28,29]. Furthermore, in order to guarantee transition of our globalization scheme to fast local convergence, we suppose that f satisfies the notion of second order semi-smoothness (cf. Section 5), which generalizes second order differentiability in our setting and whose definition slightly differs from semi-smoothness of f′ in (15).
We assume that the non-smooth part g is lower semi-continuous and satisfies a bound of the form

g(sx + (1 − s)y) ≤ s g(x) + (1 − s) g(y) − (κ₂/2) s(1 − s) ‖x − y‖²_X  (4)

for all x, y ∈ X and all s ∈ [0, 1] for some κ₂ ∈ ℝ. For κ₂ > 0 estimate (4) represents κ₂-strong convexity of g. It is known that κ₂-strong convexity of g implies that g is bounded from below, that its level sets L_α g are bounded for all α ∈ ℝ, and that their diameter shrinks to 0 as α → inf_{x∈X} g. In the case κ₂ < 0, g is allowed to be non-convex in a limited way.
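The limited non-convexity permitted by (4) with κ₂ < 0 can be illustrated on a simple example of our own construction (assuming X = ℝ): g(x) = |x| − (c/2)x² is non-convex for c > 0, yet satisfies (4) with κ₂ = −c, since |·| is convex and the concave quadratic fulfills the estimate with equality. The following sketch samples the inequality at random points as a sanity check:

```python
import numpy as np

# g(x) = |x| - 0.5*c*x**2 is non-convex for c > 0, but it still satisfies the
# relaxed convexity estimate (4) with kappa2 = -c (a toy construction of ours).
c = 0.4
g = lambda x: np.abs(x) - 0.5 * c * x**2
kappa2 = -c

rng = np.random.default_rng(0)
ok = True
for _ in range(10_000):
    x, y = rng.uniform(-5.0, 5.0, size=2)
    s = rng.uniform(0.0, 1.0)
    lhs = g(s * x + (1 - s) * y)
    rhs = s * g(x) + (1 - s) * g(y) - 0.5 * kappa2 * s * (1 - s) * (x - y)**2
    ok = ok and lhs <= rhs + 1e-12  # (4) with a small floating point tolerance
print(ok)  # prints True
```

Here the quadratic part even attains (4) with equality, so the sampled inequality holds exactly up to rounding.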
The theory behind Proximal Newton methods and the respective convergence properties revolves around the convexity estimates stated in (3) and (4). We will assign particular importance to the interplay of the convexity properties of f and g, i.e., the sum κ₁ + κ₂ will play an important part over the course of the present treatise.
Let us now shortly outline the structure of our work: In Section 3 we will consider undamped update steps computed as the solution of an adequately formulated subproblem. These can also be represented using (scaled) proximal mappings, the definition and key properties of which we shortly address. Afterwards, local superlinear convergence of the Proximal Newton method is shown. In Section 4 we present a modification of the aforementioned subproblem in order to damp update steps and globalize the Proximal Newton method. This enables the proof of optimality of all limit points of the sequence of iterates generated by our method. Section 5 concerns the introduction of second order semi-smoothness for f and showcases how it helps to verify the admissibility of both full and damped update steps sufficiently close to optimal solutions in Section 6. This in turn enables fast local convergence of our globalized method. In Section 7 the performance of our algorithm is substantiated by numerical results.
As a start, we want to introduce the definition of undamped update steps and investigate the behavior of the ensuing Proximal Newton method close to optimal solutions of problem (1).

General Dual Proximal Mappings
We compute a full step for the Proximal Newton method at a current iterate x ∈ X by solving the subproblem

∆x := argmin_{δx ∈ X} f′(x)δx + ½ H_x(δx)² + g(x + δx) − g(x).  (5)

In this section H_x denotes a general bilinear form, as introduced above. If a minimizer exists, we determine the next iterate via x₊ := x + ∆x. We will consider this update scheme and investigate its convergence properties close to optimal solutions, and in particular fast local convergence if H_x is adequately chosen as a so-called Newton derivative from ∂_N f′(x), also known as the generalized differential ∂* f′(x) in the sense of Chapter 3.2 in [29].
Proposition 1 Suppose that κ₁ + κ₂ > 0. Then subproblem (5) admits a unique solution.
Proof By assumption, the functional to be minimized is lower semi-continuous, and κ₁ + κ₂ > 0 implies that it is strictly convex as well as radially unbounded. Since X is a Hilbert space, a minimizer exists and is unique.

⊓ ⊔
Remark 1 Let us shortly elaborate on the constants κ₁ and κ₂ as well as the assumption κ₁ + κ₂ > 0. While κ₂ is a global convexity constant for g, κ₁ is a purely local quantity which differs from iterate to iterate together with the corresponding second order bilinear form H_{x_k}. This has two immediate consequences: on the one hand, ellipticity of the second order bilinear forms can locally compensate for non-convexity of g, and on the other hand, (global) convexity of g enables us to locally use non-elliptic H_x even close to optimal solutions of our minimization problem. Comparing these convexity assumptions to similar works on the topic, we recognize that the authors in both [16] and [13] require ellipticity of ∇²f(x*) in addition to convexity of g. In contrast, our (κ₁, κ₂)-formalism from above suitably quantifies the contribution to convexity of both f and g.
For the following discussion we keep the assumption κ₁ + κ₂ > 0. To introduce an adequate definition of a proximal mapping in Hilbert space we reformulate (5) directly for the updated iterate x₊ via

x₊ = argmin_{u ∈ X} ½ H_x(u − x)² + f′(x)(u − x) + g(u).  (6)

In the literature, existence of a continuous inverse H_x⁻¹ : X* → X is frequently assumed, giving rise to a mapping H_x⁻¹ f′ : X → X. Then (6) can be rearranged to

x₊ = argmin_{u ∈ X} ½ ‖u − (x − H_x⁻¹ f′(x))‖²_{H_x} + g(u).  (7)

In [16], this form of the updated iterate is considered and the notion of a proximal mapping is introduced by

prox^{H_x}_g(y) := argmin_{u ∈ X} ½ ‖u − y‖²_{H_x} + g(u),

such that there (7) takes the form x₊ = prox^{H_x}_g(x − H_x⁻¹ f′(x)). However, in this work we want to follow a different, more direct approach towards proximal mappings which allows us to use the structure of the dual space X* more accurately and to dispense with an invertibility assumption on H_x. In [25] (scaled) proximal mappings are introduced for X = ℝⁿ according to

prox^H_g(x) = argmin_{u ∈ ℝⁿ} g(u) + ½ (u − x)ᵀ H (u − x).

Observing that xᵀH represents a dual element in ℝⁿ here, we generalize this notion to the setting of Hilbert spaces and consider

P^H_g : X* → X, P^H_g(ϕ) := argmin_{u ∈ X} ½ H(u, u) − ϕ(u) + g(u),  (8)

obtaining a mapping from the dual space back to the primal space. With this definition in mind, (6) can directly be rewritten as

x₊ = P^{H_x}_g(H_x x − f′(x)).  (9)

Our notion allows us to dispense with the use of the inverse H_x⁻¹, which would in addition require κ₁ > 0.
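In coordinates, where X = ℝⁿ and the Riesz map is the identity, the dual formulation becomes fully explicit for g = λ‖·‖₁ and a diagonal H: the minimization in (8) decouples componentwise into a scaled soft-thresholding formula. The following sketch (our own toy illustration; names and the quadratic test function are hypothetical) computes one undamped step via (9) without ever inverting H:

```python
import numpy as np

def dual_prox_l1(phi, h, lam):
    """Dual scaled proximal mapping P^H_g from (8) for g = lam*||.||_1 and a
    diagonal bilinear form H = diag(h) on R^n (Riesz map = identity):
        P^H_g(phi) = argmin_u  0.5 * u^T H u - phi^T u + lam * ||u||_1 .
    Componentwise optimality yields scaled soft-thresholding; no inverse
    of H is ever formed.
    """
    return np.sign(phi) * np.maximum(np.abs(phi) - lam, 0.0) / h

def proximal_newton_step(x, grad_f, h, lam):
    """Undamped update (9): x_plus = P^H_g(H x - f'(x))."""
    return dual_prox_l1(h * x - grad_f(x), h, lam)

# toy instance of ours: f(x) = 0.5*||x - b||^2, hence f'(x) = x - b and H = I;
# one step from x = 0 lands on the exact minimizer of F, the vector soft(b, lam)
b = np.array([2.0, -0.3, 1.0])
x_plus = proximal_newton_step(np.zeros(3), lambda x: x - b, np.ones(3), lam=0.5)
print(x_plus)
```

Since f is quadratic and H is the exact second derivative here, a single step already solves the composite problem, which previews the fast local convergence discussed below.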
We will refer to (8) as the direct or dual formulation of scaled proximal mappings. First order conditions for the minimization problem posed in (9) yield the inclusion

0 ∈ H_x x₊ − (H_x x − f′(x)) + ∂_F g(x₊)

(where ∂_F denotes the Fréchet subdifferential, which under our convexity assumptions coincides with the convex subdifferential ∂g, cf. [14]). As we rearrange this identity, one could formally write

x₊ ∈ (H_x + ∂g)⁻¹(H_x x − f′(x)).

If H_x is additionally invertible, this is equivalent to

x₊ ∈ (I + H_x⁻¹ ∂g)⁻¹(x − H_x⁻¹ f′(x)),

which once again substantiates the interpretation of proximal-type methods as forward-backward splitting algorithms. Note that in particular the subdifferential of g is evaluated at the updated point x₊. We can shift convexity properties between the respective parts of the composite objective functional by inserting adequate bilinear form terms. However, this procedure does not affect the sequence of iterates generated by the update formula from above:

Lemma 1 Let q : X → ℝ be a continuous quadratic function and denote its second derivative (which is independent of x) by Q := q″(x) : X → X*. Consider the modified (but obviously equivalent) minimization problem

min_{x ∈ X} F(x) = f̃(x) + g̃(x) with f̃ := f − q, g̃ := g + q.  (10)

Then, the update steps computed via (9) are identical for both problems (1) and (10) if we choose H̃_x = H_x − Q as the corresponding bilinear form.
Proof The only claim which is not apparent is the identity of update steps. To this end, we consider the fundamental definition of the update step for problem (10) at some x ∈ X, given by

∆x = argmin_{δx ∈ X} f̃′(x)δx + ½ H̃_x(δx)² + g̃(x + δx) − g̃(x),

and observe that for q(y) = ½Q(y)² we have f̃′(x)δx = f′(x)δx − Q(x, δx), ½H̃_x(δx)² = ½H_x(δx)² − ½Q(δx)², and g̃(x + δx) − g̃(x) = g(x + δx) − g(x) + Q(x, δx) + ½Q(δx)². Summing up, all terms involving Q cancel and the modified model functional coincides with the original one, which directly shows the asserted identity of update steps.

⊓ ⊔
Remark 3 If the bilinear form for update step computation is chosen as H_x ∈ ∂_N f′(x), and thereby as H̃_x ∈ ∂_N f̃′(x) in the modified case, we automatically have H̃_x = H_x − Q.

Regularity and Fast Local Convergence
The representation of the updated iterate as the image of a scaled proximal mapping in (9) will turn out to be very useful in what follows, which is why we dedicate the next two propositions to the properties of scaled proximal mappings in our scenario. The first proposition generalizes the assertions of the so-called second prox theorem, cf. e.g. [2], to our notion of proximal mappings.
Proposition 2 Let H ∈ L(X, X*) and g satisfy the assumptions (3) and (4) with κ₁ + κ₂ > 0. Then for any ϕ ∈ X* the image of the corresponding proximal mapping u := P^H_g(ϕ) satisfies the estimate

ϕ(ξ − u) − H(u, ξ − u) + (κ₂/2)‖ξ − u‖²_X ≤ g(ξ) − g(u) for all ξ ∈ X.  (11)

Proof The proof of the estimate above is an easy consequence of the characterization of the convex subdifferential of g_H := g + ½H(·,·) and (4). First order conditions of the minimization problem in (8) yield

ϕ ∈ ∂g_H(u),

where ∂ denotes the convex subdifferential, since in particular g_H is convex due to the positivity of the sum κ₁ + κ₂. This inclusion directly implies the estimate

g(y) + ½H(y, y) ≥ g(u) + ½H(u, u) + ϕ(y − u) for arbitrary y ∈ X,

which is equivalent to

g(y) − g(u) ≥ ϕ(y − u) + ½H(u, u) − ½H(y, y).

As pointed out before, we now want to take advantage of the convexity assumption on g according to (4). To this end, we insert y = y(s) := sξ + (1 − s)u above for s ∈ ]0, 1] and apply (4) to g(y(s)). This yields

s(g(ξ) − g(u)) − (κ₂/2)s(1 − s)‖ξ − u‖²_X ≥ sϕ(ξ − u) − sH(u, ξ − u) − (s²/2)H(ξ − u)²,

where we now divide by s ≠ 0 and subsequently evaluate the limit s → 0. This procedure provides us with the asserted estimate for ξ, ϕ and u as specified above. ⊓ ⊔

The inequality from Proposition 2 can be used in order to prove several useful continuity results for general scaled proximal mappings in Hilbert spaces. However, for our purposes it suffices to assert and verify the following result, which generalizes non-expansivity of proximal mappings in Euclidean space to our setting. It plays a similar role as boundedness of the inverse of the derivative in Newton's method.
Corollary 1 (Regularity of the Prox-Mapping) Let H and g satisfy the assumptions (3) and (4) with κ₁ + κ₂ > 0. Then, for all ϕ₁, ϕ₂ ∈ X* the following Lipschitz estimate holds:

‖P^H_g(ϕ₁) − P^H_g(ϕ₂)‖_X ≤ (1/(κ₁ + κ₂)) ‖ϕ₁ − ϕ₂‖_{X*}.

Proof Let us choose H and ϕ₁, ϕ₂ as stated above and set u₁ := P^H_g(ϕ₁), u₂ := P^H_g(ϕ₂). According to Proposition 2, the first order conditions for the respective minimization problems yield the inequalities

ϕ₁(u₂ − u₁) − H(u₁, u₂ − u₁) + (κ₂/2)‖u₂ − u₁‖²_X ≤ g(u₂) − g(u₁),  (12)
ϕ₂(u₁ − u₂) − H(u₂, u₁ − u₂) + (κ₂/2)‖u₁ − u₂‖²_X ≤ g(u₁) − g(u₂),  (13)

since we can choose ξ := u₂ or ξ := u₁, respectively. Now we add (12) and (13), which yields

(ϕ₁ − ϕ₂)(u₂ − u₁) + H(u₂ − u₁)² + κ₂‖u₂ − u₁‖²_X ≤ 0.

As we rearrange this inequality, we obtain

H(u₂ − u₁)² + κ₂‖u₂ − u₁‖²_X ≤ (ϕ₂ − ϕ₁)(u₂ − u₁) ≤ ‖ϕ₁ − ϕ₂‖_{X*} ‖u₂ − u₁‖_X,

and eventually assumption (3) on H yields the assertion of the corollary.

⊓ ⊔
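The Lipschitz estimate of Corollary 1 can be observed numerically in the scalar model case (a sanity-check sketch under our own toy assumptions, not part of the paper's experiments): for g(u) = λ|u| we have κ₂ = 0, and for constant H = κ₁ > 0 the mapping P^H_g is soft-thresholding divided by κ₁, so the corollary predicts the Lipschitz constant 1/κ₁:

```python
import numpy as np

# Sanity check of Corollary 1 in the scalar case g(u) = lam*|u| (kappa2 = 0)
# with constant H = kappa1 > 0: then P^H_g(phi) = soft(phi, lam) / kappa1 and
#   |P(phi1) - P(phi2)| <= |phi1 - phi2| / (kappa1 + kappa2).
kappa1, lam = 2.0, 0.7
P = lambda phi: np.sign(phi) * np.maximum(np.abs(phi) - lam, 0.0) / kappa1

rng = np.random.default_rng(1)
pairs = rng.uniform(-10.0, 10.0, size=(1000, 2))
ratios = [abs(P(a) - P(b)) / abs(a - b) for a, b in pairs if a != b]
print(max(ratios) <= 1.0 / kappa1 + 1e-9)  # prints True
```

Soft-thresholding itself is 1-Lipschitz, so dividing by κ₁ realizes exactly the bound 1/(κ₁ + κ₂) asserted above.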
Even though the above continuity result for proximal mappings will turn out to be an important tool for the proof of local acceleration of the Proximal Newton method, we still have to deduce some crucial properties of the full update step ∆x.These will help us to characterize optimal solutions of (1) as fixed points of the method and then verify local acceleration afterwards.

Lemma 2
The undamped update steps computed via (5) are descent directions of the composite objective functional, i.e., the following estimate holds:

F(x + s∆x) − F(x) ≤ −s (κ₁ + κ₂) ‖∆x‖²_X + o(s) for s → 0.

Proof Since f is assumed to be continuously differentiable and g satisfies the estimate (4), we can deduce the following bound on the composite objective functional:

F(x + s∆x) − F(x) ≤ s [f′(x)∆x + g(x + ∆x) − g(x)] − (κ₂/2) s(1 − s) ‖∆x‖²_X + o(s).  (14)

Let us now deduce an estimate for the term in brackets on the right-hand side of (14).
To this end, we recall the proximal mapping representation of updated iterates in (9) and consider the corresponding estimate from Proposition 2 for ξ := x, which is given by

(H_x x − f′(x))(x − x₊) − H_x(x₊, x − x₊) + (κ₂/2)‖x − x₊‖²_X ≤ g(x) − g(x₊),

or equivalently

f′(x)∆x + g(x + ∆x) − g(x) ≤ −H_x(∆x)² − (κ₂/2)‖∆x‖²_X,

which we insert into (14) to directly obtain the asserted inequality. Note that over the course of this section we assume positivity of the sum κ₁ + κ₂, which indeed implies from the above that ∆x is a descent direction.

⊓ ⊔
As mentioned beforehand, this directly enables a more insightful characterization of optimal solutions of the composite minimization problem.
Proposition 3 Consider f continuously differentiable with Lipschitz derivative as well as H ∈ L(X, X*) which satisfies (3) with κ₁ + κ₂ > 0 and κ₂ from (4) for g. Then, the search direction ∆x* according to (5) is zero at every local minimizer x* ∈ X of problem (1). In particular, we obtain the fixed point equation

x* = P^H_g(H x* − f′(x*)).

Proof Local optimality of x* implies F(x* + s∆x*) ≥ F(x*) for sufficiently small s > 0. By Lemma 2 this implies ∆x* = 0.

⊓ ⊔
Having in mind these properties of update steps and optimal solutions in addition to the continuity result for scaled proximal mappings from Corollary 1, we can now prove the local acceleration result for our Proximal Newton method with undamped steps near optimal solutions.
For the following we require f′ to be semi-smooth near an optimal solution x* of our problem (1) with respect to H_x, i.e., the following approximation property holds:

‖f′(x* + ξ) − f′(x*) − H_{x*+ξ} ξ‖_{X*} = o(‖ξ‖_X) for ‖ξ‖_X → 0.  (15)

As pointed out before, adequate definitions of H_x can be given via the Newton derivative H_x ∈ ∂_N f′(x) for Lipschitz-continuous operators in finite dimension as well as for corresponding superposition operators, cf. Chapter 3.2 in [29].
Theorem 1 (Fast Local Convergence) Suppose that x* ∈ X is an optimal solution of problem (1). Consider two consecutive iterates x, x₊ ∈ X which have been generated by the update scheme from above and are close to x*. Furthermore, suppose that (15) holds for H_x in addition to the assumptions from the introductory section with κ₁ + κ₂ > 0. Then we obtain:

‖x₊ − x*‖_X = o(‖x − x*‖_X) for ‖x − x*‖_X → 0.

Proof Consider the proximal mapping representations deduced above for both the updated iterate x₊ in (9) and for the optimal solution x* in Proposition 3 via

x₊ = P^{H_x}_g(H_x x − f′(x)) and x* = P^{H_x}_g(H_x x* − f′(x*)).

Next, we directly take advantage of these identities together with the continuity result for scaled proximal mappings from Corollary 1 in order to deduce the estimate

‖x₊ − x*‖_X ≤ (1/(κ₁ + κ₂)) ‖H_x(x − x*) − (f′(x) − f′(x*))‖_{X*} = o(‖x − x*‖_X),

where in the last step also the semi-smoothness (15) of f′ played a crucial role. This directly verifies the asserted local acceleration result.

⊓ ⊔
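Theorem 1 can be watched at work on a one-dimensional toy instance of our own construction: f(x) = eˣ − 2x with the exact second order model H_x = f″(x) and g = 0.1|·|, whose unique minimizer satisfies eˣ* − 2 + 0.1 = 0, i.e. x* = log(1.9). Iterating the undamped step (9) exhibits the expected superlinear (here even quadratic) error decay:

```python
import numpy as np

# Toy illustration of Theorem 1 in one dimension (our own construction):
# f(x) = exp(x) - 2x with exact model H_x = f''(x) = exp(x), g(x) = 0.1*|x|,
# minimizer x* = log(1.9). We iterate the undamped step (9) and record errors.
lam = 0.1
x_star = np.log(1.9)

def step(x):
    H = np.exp(x)                      # H_x = f''(x), elliptic for all x
    phi = H * x - (np.exp(x) - 2.0)    # dual argument H_x x - f'(x)
    return np.sign(phi) * np.maximum(np.abs(phi) - lam, 0.0) / H  # P^H_g(phi)

x, errs = 2.0, []
for _ in range(8):
    x = step(x)
    errs.append(abs(x - x_star))
print(errs)  # errors contract superlinearly
```

On the smooth branch near x*, the step reduces to an exact Newton step on eˣ − 1.9, which explains the quadratic contraction of the recorded errors.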
In particular, this implies local superlinear convergence of our Proximal Newton method if we can additionally verify global convergence to an optimal solution. Note that even for the local acceleration result, ellipticity of H_x ∈ ∂_N f′(x) does not necessarily have to be demanded. Here too, all that matters is strong convexity of the composite functional. This might be surprising, since what actually accelerates the method is the second order information on the (possibly non-convex) but differentiable part f with semi-smooth derivative f′. As a consequence, the (strong) convexity of g can contribute not only to the well-definedness of update steps as solutions of (5) but also to the local acceleration of our algorithm.
The main reason for this generalization of the local acceleration result is our slightly generalized notion of proximal mappings. In particular, we did not deduce (firm) non-expansivity in the scaled norm as, for example, in [16], but instead took advantage of the strong convexity of the composite objective functional in the form of assumptions (3) and (4) with κ₁ + κ₂ > 0.
Note that for the above results to hold it was crucial that the current iterate x is already close to an optimal solution of problem (1), which is why in the next section we address one possibility to globalize our Proximal Newton method. We will see that eventually we are in the position to use undamped update steps for the computation of iterates and thereby benefit from the local acceleration result in Theorem 1.

Globalization via an additional norm term
Let us consider the following modification of (5) and define the damped update step at a current iterate x as a minimizer of the following modified model functional:

λ_ω(δx) := f′(x)δx + ½ H_x(δx)² + (ω/2)‖δx‖²_X + g(x + δx) − g(x).

As a consequence, we define

∆x(ω) := argmin_{δx ∈ X} λ_ω(δx).  (16)

Here ω > 0 is an algorithmic parameter that can be used to achieve global convergence.
Setting H̄ := H_x + ωR with the Riesz isomorphism R : X → X*, we observe that (16) is of the form (5) with κ₁ replaced by κ̄₁ = κ₁ + ω, so that the existence and regularity results of the previous sections apply.
The updated iterate then takes the form x₊(ω) := x + ∆x(ω). Apparently, the update step in (16) is well defined if ω + κ₁ + κ₂ > 0. Consequently, for what follows we only consider ω > −(κ₁ + κ₂) in order to guarantee unique solvability of the update step subproblem. The full update steps from (5) are here damped along a curve in X which is parametrized by the regularization parameter ω ∈ ]−(κ₁ + κ₂), ∞[. However, note that here the Hilbert space structure of X is also important for the strong convexity of functions of the form g + (ω/2)‖·‖²_X with g as in (4) for arbitrary κ₂ ∈ ℝ. In a general Banach space setting, we cannot expect additional norm terms to compensate disadvantageous convexity properties, cf. [2, Remark 5.18].
Let us now take a look at how we can rearrange the subproblem for finding an updated iterate by using the scalar product ⟨·, ·⟩_X as well as the Riesz isomorphism R:

x₊(ω) = P^{H_x + ωR}_g((H_x + ωR)x − f′(x)).  (17)

Note that H_x + ωR : X × X → ℝ satisfies (3) with constant κ₁ + ω, such that the combination of g and H_x + ωR still satisfies the requirements of Proposition 2 for all ω > −(κ₁ + κ₂). Additionally, the results of Lemma 1 apparently also hold in the globalized case. The formulation of updated iterates via the above scaled proximal mapping enables us to establish some helpful properties of the damped update steps ∆x(ω).
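In Euclidean coordinates the Riesz map R is the identity, so the damped subproblem is simply the undamped one with H_x replaced by H_x + ωI. The following sketch (toy setup and names ours, for g = λ‖·‖₁ and a diagonal model Hessian) makes this explicit and shows how increasing ω shrinks the step:

```python
import numpy as np

# In Euclidean coordinates R = I, so the damped step (16)/(17) is the
# undamped step with H_x replaced by H_x + omega*I (toy sketch of ours).
def damped_step(x, grad_f, h, lam, omega):
    h_reg = h + omega                  # bilinear form H_x + omega * R
    phi = h_reg * x - grad_f(x)        # dual argument for (17)
    x_plus = np.sign(phi) * np.maximum(np.abs(phi) - lam, 0.0) / h_reg
    return x_plus - x                  # damped update step dx(omega)

b = np.array([2.0, -0.3, 1.0])
grad_f = lambda x: x - b               # f(x) = 0.5*||x - b||^2, H_x = I
full = damped_step(np.zeros(3), grad_f, np.ones(3), 0.5, omega=0.0)
damped = damped_step(np.zeros(3), grad_f, np.ones(3), 0.5, omega=9.0)
print(full, damped)  # increasing omega shrinks the step towards zero
```

As ω grows, the damped step contracts towards zero along the arc parametrized by ω, which is exactly the damping behavior exploited by the globalization scheme.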
Proposition 4 Under the assumptions (3) for H_x and (4) for g the inequality

f′(x)∆x(ω) + g(x + ∆x(ω)) − g(x) ≤ −H_x(∆x(ω))² − (ω + κ₂/2) ‖∆x(ω)‖²_X

holds for the update step ∆x(ω) as defined in (16) and arbitrary ω > −(κ₁ + κ₂).

Proof The proof follows along the same lines as the derivation of the auxiliary estimate for the bracket term in the proof of Lemma 2. Due to the structure of the update formula in (17), we can take advantage of the estimate from Proposition 2 with ϕ = (H_x + ωR)x − f′(x), H = H_x + ωR and ξ = x, which yields u = P^H_g(ϕ) = x₊(ω) and thereby

((H_x + ωR)x − f′(x))(x − x₊(ω)) − (H_x + ωR)(x₊(ω), x − x₊(ω)) + (κ₂/2)‖x − x₊(ω)‖²_X ≤ g(x) − g(x₊(ω)).

This inequality is equivalent to the asserted estimate.

⊓ ⊔
With the above estimate for damped update steps at hand, let us now formulate a criterion for sufficient decrease which will help us to verify a global convergence result for our Proximal Newton method. We call a value of the regularization parameter ω > −(κ₁ + κ₂) admissible for sufficient decrease if the inequality

F(x + ∆x(ω)) − F(x) ≤ γ λ_ω(∆x(ω))  (18)

for some prescribed γ ∈ ]0, 1[ is satisfied. We may interpret −λ_ω(∆x(ω)) as a predicted decrease and rewrite condition (18) as follows:

(F(x) − F(x + ∆x(ω))) / (−λ_ω(∆x(ω))) ≥ γ.

This is the classical ratio of actual decrease and predicted decrease which is often used for trust-region algorithms. Before trying to verify that the descent criterion (18) is fulfilled for sufficiently large values of ω, we note that the assertion in Proposition 4 implies the insightful estimate

λ_ω(∆x(ω)) ≤ −½ H_x(∆x(ω))² − ((κ₂ + ω)/2) ‖∆x(ω)‖²_X ≤ −((κ₁ + κ₂ + ω)/2) ‖∆x(ω)‖²_X,  (19)

which yields that, once the criterion is satisfied, update steps unequal to zero provide real descent in the composite objective functional F according to

F(x + ∆x(ω)) − F(x) ≤ −γ ((κ₁ + κ₂ + ω)/2) ‖∆x(ω)‖²_X < 0.  (20)

Let us now take a look at the existence of sufficiently large values of the regularization parameter ω. Here, the Lipschitz continuity of f′ comes into play for the first time.
Lemma 3 For f, H_x and g as above, the criterion for sufficient descent introduced via (18) is satisfied for all sufficiently large values of ω.

Proof By our lower bound on ω and (19) we obtain a bound on the predicted decrease in terms of ‖∆x(ω)‖²_X. The Lipschitz continuity of f′ directly yields the estimate

f(x + ∆x(ω)) ≤ f(x) + f′(x)∆x(ω) + (L_f/2)‖∆x(ω)‖²_X,

from which we immediately obtain an estimate for the descent in the composite objective functional. For ω sufficiently large, this estimate implies (18) and thereby concludes the proof of the assertion. ⊓ ⊔

Additionally, for global convergence it turns out that we have to guarantee that the update steps remain uniformly bounded. A simple way to achieve this is to impose the following restriction:

‖∆x_k(ω_k)‖_X ≤ M

for some prescribed upper bound M. Due to (19) this can be achieved for a sufficiently large choice of ω_k. All in all, this results in the following algorithm:

Algorithm 1: Second order semi-smooth Proximal Newton algorithm damped according to (16)
for k = 0, 1, 2, ... do
    Compute ∆x_k(ω_k) via (16);
    if (1 + ω_k)‖∆x_k(ω_k)‖_X ≤ ε then stop;
    if ω_k is admissible for sufficient decrease according to (18) then
        Set x_{k+1} := x_k + ∆x_k(ω_k);
        Decrease ω_k to some ω_{k+1} < ω_k for next iteration;
    else
        Increase ω_k appropriately;
    end
end

Now that we have formulated the algorithm and can be sure that we can always damp update steps sufficiently such that they yield descent according to (18), we will verify the stationarity of limit points of the sequence of iterates generated by Algorithm 1. To this end, we will first prove that the norm of the corresponding update steps converges to zero along the sequence of iterates.

Lemma 4 Let (x_k) ⊂ X be the sequence generated by the Proximal Newton method globalized via (16) for admissible values of the regularization parameter ω_k, starting at any x₀ ∈ dom g. Then either F(x_k) → −∞ or ‖∆x_k(ω_k)‖_X → 0.

Proof By (20) the sequence F(x_k) is monotonically decreasing. Thus, either F(x_k) → −∞, or F(x_k) → F̄ for some F̄ ∈ ℝ and thus in particular F(x_k) − F(x_{k+1}) → 0. Since γ > 0, also λ_{ω_k}(∆x_k(ω_k)) → 0. Since, by assumption, the ω_k stay uniformly above −(κ₁ + κ₂), estimate (19) then yields ‖∆x_k(ω_k)‖_X → 0. ⊓ ⊔
If we take a look at the optimality conditions for the step computation in (16) at x₊(ω), we obtain a stationarity relation with the Fréchet subdifferential of g on the right-hand side. This directly yields the existence of some η ∈ ∂_F g(x_{k+1}) which approximates −f′(x_{k+1}) up to a residual that is controlled by ‖∆x_k(ω_k)‖_X. Thus, by Lemma 4, this residual tends to zero, provided that L_f < ∞ exists, that ‖H_{x_k}‖_{L(X,X*)} ≤ M is bounded, and that ω_k is bounded. The latter can be guaranteed via Lemma 3 if the "appropriate increase" of ω_k in Algorithm 1 is done by no more than a fixed factor ρ > 1.
Remark 4 With some additional technical effort, the assumption of Lipschitz continuity of f′ could be relaxed to a uniform continuity assumption.
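Putting the pieces together, a minimal realization of Algorithm 1 might look as follows. This is a hedged sketch under our own simplifying assumptions (X = ℝⁿ, R = I, g = λ‖·‖₁, diagonal model Hessian; all names are ours): the acceptance test implements the ratio criterion (18), ω is increased by a fixed factor ρ on rejection as required for Lemma 3, and the stopping test uses the scaled step norm (1 + ω_k)‖∆x_k(ω_k)‖_X:

```python
import numpy as np

def prox_newton(x, f, grad_f, hess_diag, lam, gamma=0.1, rho=2.0,
                omega=1.0, tol=1e-8, max_iter=100):
    """Sketch of Algorithm 1 under simplifying assumptions (X = R^n, R = I,
    g = lam*||.||_1, diagonal model Hessian). Acceptance uses the ratio
    criterion (18); omega grows by a fixed factor rho on rejection."""
    g = lambda z: lam * np.sum(np.abs(z))
    F = lambda z: f(z) + g(z)
    for _ in range(max_iter):
        h = hess_diag(x) + omega                       # H_x + omega * R
        phi = h * x - grad_f(x)                        # dual argument, cf. (17)
        x_plus = np.sign(phi) * np.maximum(np.abs(phi) - lam, 0.0) / h
        dx = x_plus - x
        if (1.0 + omega) * np.linalg.norm(dx) <= tol:  # scaled stopping rule
            return x
        # predicted decrease -lambda_omega(dx) of the damped model functional
        predicted = -(grad_f(x) @ dx + 0.5 * h @ (dx * dx) + g(x_plus) - g(x))
        if F(x) - F(x_plus) >= gamma * predicted > 0.0:
            x, omega = x_plus, max(omega / rho, 1e-12)  # accept, relax damping
        else:
            omega *= rho                                # reject, damp harder
    return x

b = np.array([2.0, -0.3, 1.0])
sol = prox_newton(np.zeros(3), lambda z: 0.5 * np.sum((z - b)**2),
                  lambda z: z - b, lambda z: np.ones(3), lam=0.5)
print(sol)  # converges to soft(b, 0.5) = [1.5, 0, 0.5]
```

For this quadratic test function with exact model Hessian, every trial step is accepted, ω is driven down, and the iteration terminates at the exact composite minimizer.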
Observe that we can indeed interpret ‖∆x_k(ω_k)‖_X ≤ ε as a condition for the optimality of the subsequent iterate up to some prescribed accuracy. However, small step norms ‖∆x_k(ω_k)‖_X can also occur due to very large values of the damping parameter ω_k, as a consequence of which the algorithm would stop even though the sequence of iterates is not even close to an optimal solution of the problem. In order to rule out this inconvenient case, we consider the scaled version (1 + ω_k)‖∆x_k(ω_k)‖_X in the stopping criterion of Algorithm 1. Now we are in the position to discuss subsequential convergence of our algorithm to a stationary point. In the following, we will assume throughout that F(x_k) is bounded from below. We start with the case of convergence in norm:

Theorem 2 Under the assumptions explained in the introductory section, all accumulation points x̄ (in norm) of the sequence of iterates (x_k) generated by the Proximal Newton method globalized via (16) are stationary points of problem (1).
Proof Let us consider a modified version of our minimization problem as in (10) in Lemma 1 and choose q(x) = ½Q(x)² for Q : X × X → ℝ such that g̃ = g + q is (strongly) convex on its domain. This is always possible by (4). According to Lemma 1, the sequence of iterates remains unchanged, and step computation takes the form of (16) with g̃ and H̃_{x_k} = H_{x_k} − Q in place of g and H_{x_k}, with first order optimality conditions in which ∂g̃(x_{k+1}) denotes the convex subdifferential of g̃ at x_{k+1}. Consequently, we know that there exists some η̃_k ∈ ∂g̃(x_{k+1}) such that the corresponding stationarity identity holds, with a remainder term r_{x_k}(∆x_k(ω_k)) on the right-hand side. As before, the remainder term tends to zero for k → ∞, i.e., we have η̃ := lim_{k→∞} η̃_k = −f′(x̄) + Qx̄. The definition of the convex subdifferential ∂g̃ together with the lower semi-continuity of g̃ directly yields

g̃(u) ≥ g̃(x̄) + η̃(u − x̄) for any u ∈ X,

which proves the inclusion η̃ ∈ ∂g̃(x̄). The evaluation of the latter limit expression can easily be retraced by splitting off the quadratic part q. In particular, we recognize η̃ ∈ ∂g̃(x̄) as −f′(x̄) + Qx̄ ∈ ∂g̃(x̄), and equivalently −f′(x̄) ∈ ∂_F g(x̄) for the Fréchet subdifferential ∂_F. This implies 0 ∈ ∂_F F(x̄), i.e., the stationarity of our limit point x̄.

⊓ ⊔
Also note that in general the above global convergence result does not rely on strong convexity of the composite objective function F, but yields stationarity of limit points also in the non-convex case κ₁ + κ₂ < 0 with ω_k > −(κ₁ + κ₂) chosen adequately. In particular, this ensures that, independent of strong convexity assumptions near optimal solutions, the algorithm approaches the optimal solution and can then benefit from additional convexity at later iterations.
While bounded sequences in finite dimensional spaces always have convergent subsequences, in general Hilbert spaces we can only expect weak subsequential convergence in this case. As one consequence, existence of minimizers of non-convex functions on Hilbert spaces can usually only be established in the presence of some compactness. On this count we note that in (23) even weak convergence x_k ⇀ x̄ would be sufficient. Unfortunately, in the latter case we cannot conclude f′(x_k) → f′(x̄).
In order to extend our proof to this situation, we require some more structure for both parts of our composite objective functional. To this end, we recall the following well-known definition of compact operators:

Definition 1 A linear operator K : X → Y between two normed vector spaces X and Y is called compact if one of the following equivalent statements holds:
1) The image of the unit ball of X is relatively compact in Y (i.e., its closure is compact).
2) For any bounded sequence (x_n)_{n∈ℕ} ⊂ X the image sequence (Kx_n)_{n∈ℕ} ⊂ Y contains a strongly convergent subsequence.
With this notion at hand, we can formulate the following global convergence theorem:

Theorem 3 Suppose that, in addition to the assumptions of Theorem 2, the smooth part possesses the structure f(x) = f̃(Kx), where K is a compact operator. Additionally, assume that g + f is convex and weakly lower semi-continuous in a neighborhood of stationary points of (1). Then weak convergence of the sequence of iterates x_k ⇀ x̄ suffices for x̄ to be a stationary point of (1). If F is strictly convex and radially unbounded, the whole sequence (x_k) converges weakly to the unique minimizer x* of F. If F is κ-strongly convex with κ > 0, then the iterates even converge to x* in norm.

Proof We can employ the same proof as above, replacing g by g + f and using that f̃′(Kx_k) → f̃′(Kx̄) in norm, by compactness. This finally shows the desired subdifferential inclusion, which again constitutes 0 ∈ ∂_F F(x̄) and thereby the stationarity of the weak limit point x̄.
Let us now consider the second assertion: F being strictly convex as well as radially unbounded yields that problem (1) has a unique solution x*. Additionally, we know that our sequence of iterates is bounded, as a consequence of which we can select a weakly convergent subsequence. The first assertion of the theorem then implies that the limit of each such subsequence is a stationary point of problem (1), and thus, by convexity, the unique optimal solution x*. A standard argument then shows that the whole sequence converges weakly to x*.
If F is κ-strongly convex, then, as discussed below (4), the diameter of the level sets of F near x* is controlled by κ, which yields the strong convergence of the sequence.

Second order semi-smoothness
In order to benefit from the local acceleration result in Theorem 1, we have to ensure that, under the assumptions on F stated in Section 1, eventually also full steps are admissible for sufficient descent according to our criterion formulated in (18). To this end, we introduce a new notion of differentiability, which we call second order semi-smoothness, and investigate how it interacts with our Proximal Newton method.
For the smooth part f of our composite objective function F we define second order semi-smoothness at some x* ∈ dom f by the estimate (24), required to hold for any ξ ∈ X. This will be precisely the assumption that we need to conclude the transition to fast local convergence in the following section.
We give a general definition for operators. Denote by L⁽²⁾(X, Y) the normed space of bounded Y-valued bilinear forms X × X → Y, equipped with the usual norm:

Definition 2 Let X, Y be normed linear spaces and let D ⊂ X be a neighborhood of x*. Consider a continuously differentiable operator T : D → Y and a bounded mapping T′′ : D → L⁽²⁾(X, Y). We call T second order semi-smooth at x* ∈ X with respect to T′′ if the following estimate holds:

Since T′′ is evaluated at x* + ξ, the choice of T′′ is far from unique. Twice continuously differentiable operators are apparently second order semi-smooth:

Proposition 5 Assume that T is twice continuously differentiable at x*. Then T is second order semi-smooth at x* with respect to the ordinary second derivative T′′.
Proof This follows by a simple computation: both terms in square brackets are o(‖ξ‖²); the first by twice continuous Fréchet differentiability of T, the second by continuity of T′′ at x*.

⊓ ⊔
It is an obvious remark that the sum of two second order semi-smooth functions is again second order semi-smooth, with linear and quadratic terms defined via sums. Furthermore, the following chain rule can be shown:

Theorem 4 Suppose that S : D_S → Y and T : D_T → Z with S(D_S) ⊂ D_T are second order semi-smooth at x* ∈ D_S and y* = S(x*) with respect to S′′ and T′′, respectively. Then T ∘ S is second order semi-smooth with respect to (T ∘ S)′′, defined as follows:

Proof We introduce the notations x = x* + ξ, y = S(x), y* = S(x*), and η = y − y*.
With these prerequisites we can, as usual for chain rules, split the remainder term. We will show that each of the expressions (25)-(28) is o(‖ξ‖²_X). For (25) this follows from second order semi-smoothness of T, while second order semi-smoothness of S implies the desired result for (26). Continuity of T′ and boundedness of S′′ yield that (27) is o(‖ξ‖²_X). Finally, (28) can be reformulated via the difference-of-squares identity a² − b² = (a + b)(a − b). By continuous differentiability of S (which is a prerequisite of second order semi-smoothness by our definition) we can estimate the resulting factors, which finally yields the desired result.

⊓ ⊔
Remark 5 In the case T′(y*) = 0, we observe from (26) that S only needs to be continuously differentiable and we may set S′′ = 0.
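The definition of (T ∘ S)′′ is not displayed above; a plausible reconstruction, consistent with the splitting used in the proof and with Remark 5 (our own rendering, not verbatim from the source), is the usual second order chain rule with evaluation points adapted to Definition 2:

```latex
% Candidate definition of (T ∘ S)'' at x* + ξ, applied to (δx, δx):
% the second term vanishes when T'(y*) = 0, matching Remark 5.
\[
  (T \circ S)''(x^* + \xi)(\delta x, \delta x)
  = T''\big(y^* + \eta\big)\big(S'(x^*)\,\delta x,\; S'(x^*)\,\delta x\big)
  + T'(y^*)\, S''(x^* + \xi)(\delta x, \delta x),
\]
% where y* = S(x*) and η = S(x* + ξ) − y*.
```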
Second order semi-smoothness of T and semi-smoothness of T′ as in (15) are closely related but not equivalent in general. Even in the case T′′(x) := ∂_N T′(x) we cannot conclude one condition from the other, e.g. via the fundamental theorem of calculus, because of the lack of continuity of ∂_N T′. A simple and illustrative example is the function h with h(x) = ... for x ≠ 0 and h′(0) = 0. The cubic asymptotics of h suggest that T′′(x) ≡ 0 is a possible definition for second order semi-smoothness of h at x* = 0. Indeed, for x ∈ R and δx = x − x* = x we obtain that h is second order semi-smooth at x* = 0 with respect to this T′′. On the other hand, a direct computation implies that h′ is not semi-smooth at x* = 0 with respect to the same T′′, cf. (15). However, in many cases of practical interest, both conditions can be shown to hold.
For instance, the function φ(x) = max{0, x}² is second order semi-smooth at the point x = 0 with respect to a suitable φ′′, as well as twice Fréchet differentiable (and thus also second order semi-smooth, cf. Proposition 5) at any other point x ≠ 0 with the same φ′′(ξ). By standard techniques we can lift this property to superposition operators on L_p-spaces for appropriate p.
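This can be checked directly in one dimension. The following sketch is our own; we choose φ′′(ξ) = 2 for ξ > 0 and φ′′(ξ) = 0 otherwise, evaluated at x* + ξ as in Definition 2, and verify that the second order remainder at x* = 0 in fact vanishes identically, which is stronger than the required o(|ξ|²) behavior:

```python
# phi(x) = max(0, x)^2 with derivative phi'(x) = 2 max(0, x).
phi = lambda x: max(0.0, x) ** 2
dphi = lambda x: 2.0 * max(0.0, x)
# Candidate "second derivative" (bounded, discontinuous at 0):
ddphi = lambda x: 2.0 if x > 0.0 else 0.0

x_star = 0.0
for xi in (-2.0, -1e-3, 1e-3, 0.5, 2.0):
    # Second order remainder from Definition 2, with phi'' evaluated at x* + xi:
    r = phi(x_star + xi) - phi(x_star) - dphi(x_star) * xi \
        - 0.5 * ddphi(x_star + xi) * xi ** 2
    assert r == 0.0  # exactly zero for every xi
```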
For convenience, we recapitulate the following lemma, which is a slight generalization of a standard result on continuity of superposition operators.

Lemma 5
Let Ω be a measurable subset of R^d, and ψ : R × Ω → R. Assume that for each measurable function x : Ω → R the function Ψ(x), defined by Ψ(x)(t) = ψ(x(t), t), is measurable. Let x* ∈ L_p(Ω, R) be given. Then the following assertion holds: if ψ is continuous with respect to x at (x*(t), t) for almost all t ∈ Ω, and Ψ maps L_p(Ω, R) into L_s(Ω, R) for 1 ≤ p, s < ∞, then Ψ is continuous at x* in the norm topology.

⊓ ⊔
The standard textbook result requires ψ to be a Carathéodory function, and thus in particular continuous in x for all t ∈ Ω. This assumption is slightly weakened here to the almost-everywhere sense. It is known, for example, that pointwise limits and suprema of Carathéodory functions yield superposition operators that map measurable functions to measurable functions. The mapping φ′′ as defined above is an example.
Importantly, this result is not true for the case p < s = ∞.
Proposition 6 Consider a real function φ : R → R with globally Lipschitz-continuous derivative φ′ : R → R, which is second order semi-smooth with respect to a bounded function φ′′ : R → R. Let Ω ⊂ R^d be a set of finite measure and assume that the composition φ′′ ∘ u is measurable for any measurable function u : Ω → R. Let p > 2. Then for each x ∈ L_p(Ω) the superposition operator Φ is second order semi-smooth.

Proof Consider a representative of x ∈ L_p(Ω) and the function r_x(ω, t), defined for t ≠ 0, with r_x(ω, t) := 0 for t = 0. By Lipschitz-continuity of φ′ and boundedness of φ′′ we observe that r_x is bounded uniformly on Ω × R. Thus, the superposition operator R_x : L_p(Ω) → L_s(Ω), R_x(ξ)(ω) = r_x(ω, ξ(ω)), is well defined for any 1 ≤ s ≤ ∞. By second order semi-smoothness, r_x(ω, ·) is continuous at t = 0 for almost all ω ∈ Ω. Hence, by Lemma 5, R_x is continuous as an operator at ξ = 0 for any s < ∞. By the Hölder inequality with 1/s + 2/p = 1 we conclude the desired estimate.

Unsurprisingly, and in analogy to the theory of semi-smooth superposition operators, there is a norm gap in the sense that Proposition 6 is false for p = 2. This is closely related to the so-called two-norm discrepancy (cf. e.g. [26]). As in the above example, φ′′(ξ) has a discontinuity at ξ = 0, so we cannot expect Φ′′ to be a continuous mapping on a given open set. However, we can show the following result:

Proof We apply Lemma 5 to the superposition operator Φ′′(x)(ω) := φ′′(x(ω)), which maps L_p(Ω) → L_s(Ω), and then use the Hölder inequality to conclude:

⊓ ⊔
In our example, φ(x) = max{0, x}² fulfills the hypothesis of this theorem at x* ∈ L_p(Ω) if x*(ω) = 0 only on a set of measure 0 in Ω. This kind of regularity assumption is also frequently found in the literature on semi-smooth Newton methods (cf. e.g. [11]).

Transition to Fast Local Convergence
Let us now turn our attention back to our Proximal Newton method and consider the admissibility of undamped update steps near optimal solutions of problem (1). Both the semi-smoothness of f′ from (15) and the second order semi-smoothness of f from (24) will contribute a crucial part to the proof of this result. Additionally, the local acceleration result from Theorem 1 will play an important role. However, an algorithm that tests at every iterate whether the undamped Newton step is acceptable is likely to compute many unnecessary trial iterates during the early phase of globalization. Thus, it is of interest whether damped Newton steps are also acceptable close to the solution.
In order to establish the corresponding proposition of admissibility we will first have to investigate the relation between damped and undamped steps more closely.
Lemma 6 Let H_x be a bilinear form as in (3) and assume that g satisfies (4), where κ₁ + κ₂ > 0 holds and x ∈ X is arbitrary. Then the damped update step Δx(ω) from (16) and the undamped update step Δx from (5) satisfy the following estimates for any ω ≥ 0.
Proof The above estimates can all be deduced from adequate proximal representations of the respective update steps. We can characterize the undamped step via Δx = x₊ − x, where the updated iterate x₊ is given by its proximal representation. Now, consider the corresponding inequality from Proposition 2 for φ = H_x(x) − f′(x), H = H_x, and ξ := x₊(ω), which can be rearranged to a more useful form. For the damped update step we consider a different form than in (17) and attribute the additional norm term (ω/2)‖·‖²_X to the lower argument function g. This results in the proximal representation of x₊(ω). The deduction of the respective inequality induced by the first order conditions of the proximal subproblem turns out to be slightly more complicated. We use H = H_x and φ = H_x(x) + ωRx − f′(x) together with ξ = x₊ in Proposition 2. Note here that the lower argument function g + (ω/2)‖·‖²_X satisfies (4) with constant κ₂ + ω. Thus, we obtain (34). We bring the Riesz term ω⟨Rx, Δx − Δx(ω)⟩ to the right-hand side of (34) and obtain (35). This inequality will be of importance once more later on. For now, we estimate the corresponding term and add (33) and (36). Here we can use assumption (3) on H_x and rearrange the resulting estimate. This is exactly the first asserted inequality (31) if we divide by ‖Δx − Δx(ω)‖_X, which we can assume to be non-zero without loss of generality. From here, we can directly deduce the second part of (32), since we can take advantage of (31). The first part of (32), on the other hand, requires some more consideration. We start at (35), but now take another route and directly add it to (33), which yields (37) as we use (3) for H_x. All prefactors in (37) are positive due to our assumptions, such that the first part of (32) follows. This completes the proof.

⊓ ⊔
The equivalence result for damped and undamped update steps in the form of (32) enables the proof of the following corollary, which will turn out to be useful for the admissibility of damped steps close to optimal solutions.
Corollary 2 Close to an optimal solution x* of (1) we can find constants c₁, c₂ > 0 such that the following estimates hold:

Proof For the deduction of both asserted inequalities we will take advantage of the local superlinear convergence stated in Theorem 1, i.e., ‖x₊ − x*‖_X = o(‖x − x*‖_X) in the limit x → x*. Consequently, we can write ‖x₊ − x*‖_X = ψ(‖x − x*‖_X) ‖x − x*‖_X for some function ψ : [0, ∞[ → [0, ∞[ with ψ(t) → 0 as t → 0. With this helpful representation at hand, the first asserted inequality follows directly from the definition of ψ. We can deduce the second one similarly quickly: we may assume ψ(‖x − x*‖_X) < 1 close to the optimal solution x* and thereby conclude with the additional help of (32). Taking into account that ω remains bounded completes the proof of the second asserted inequality.

⊓ ⊔
Now we are in a position to prove the admissibility of both undamped and damped steps close to optimal solutions of the composite minimization problem (1). We will see that undamped steps are generally admissible, whereas for the admissibility of damped steps we have to assume an additional property of the second order model bilinear forms H_x.
Proposition 8 Let x* ∈ X be an optimal solution of (1), let H_x ∈ ∂_N f′(x) satisfy (3), and let g satisfy (4) with κ₁ + κ₂ > 0 in a neighborhood of x*. Additionally, suppose that (24) holds for f and (15) holds for f′ at x*.
Steps as in (16) for any ω ≥ 0 are admissible for sufficient descent according to (18) for any γ < 1 if the second order bilinear forms H_x satisfy a bound of the form (39). In particular:
i) full steps Δx as defined in (5) are eventually admissible;
ii) if the mapping x → Hx is continuous at x = x * , then eventually all steps are admissible.
Proof Let us take a look at the descent in the composite objective function F when performing an update step and see which estimates we can deduce with the help of the assumptions and results preceding this proposition.
We will denote the update by Δx(ω) or x₊(ω) = x + Δx(ω), respectively, for some arbitrary ω ≥ 0, such that the notation comprises both the damped and the undamped case. Now we estimate the descent in the smooth part of the objective function, f(x + Δx(ω)) − f(x). By telescoping we obtain the identity (40). In the last step we used second order semi-smoothness of f and semi-smoothness of f′ at x*.
We observe that the only critical term is the one denoted by ρ below. We bound H_x Δx(ω) by Corollary 2 and then directly deduce the corresponding estimate. Now we have to consider an estimate for the critical term ρ defined as above. We can define a prefactor function γ̄ : X × [0, ∞[ → R for the admissibility criterion (18), which should be larger than some γ ∈ ]0, 1[. We may assume that the numerator of the latter expression is non-positive; otherwise this inequality is trivially fulfilled. Thus, by decreasing the positive denominator via (19) we obtain that for any ε > 0 there is a neighbourhood of x* such that the estimate holds for any iterate x in this neighbourhood, where the latter ε-term arises from o(‖Δx(ω)‖²_X)/‖Δx(ω)‖²_X and can be chosen arbitrarily small since ‖Δx(ω)‖_X → 0. The ρ-term then vanishes by assumption (39), which is implied by i) or ii) in the following way:

⊓ ⊔
The seemingly paradoxical behavior that full Newton steps yield a better model approximation than damped Newton steps stems from the fact that f′ is not Fréchet differentiable in general. The only prerequisite we can take advantage of is (24) at the fixed point x*.
The continuity assumption ii) on H_x can be verified for superposition operators via Proposition 7; it holds, for example, for φ(t) = max(0, t)² if x*(ω) = 0 only on a set of zero measure.

Numerical Results
We consider the following problem on a domain Ω with parameters c > 0 and α, β ∈ R, as well as a force field ρ : Ω → R. The norm ‖·‖_{R²} denotes the Euclidean 2-norm on R². In the sense of the theory of the preceding sections we can identify the smooth part of F as f. We have to note here that, in the case α ≠ 0, f technically does not satisfy the assumptions made on the smooth part of the composite objective functional specified above, due to the lack of semi-smoothness of the corresponding squared max-term. The use of the derivative ∇u instead of function values u creates a norm gap which cannot, as usual, be compensated by Sobolev embeddings, and this hinders the proof of semi-smoothness of the respective superposition operator. However, we think that slightly going beyond the framework of theoretical results for numerical investigations can be instructive.
For our implementation of the solution algorithm we chose the force field ρ to be constant on its domain and equal to some so-called load factor ρ̄ > 0, which we will from now on simply refer to as ρ. Consequently, the non-smooth part g of the objective functional only consists of the scaled integral over the absolute value term, which also satisfies the specifications made on g before. Note that the underlying Hilbert space is given by X = H¹₀(Ω, R), which also determines the choice of norm for the regularization of the subproblem.
In the following we dive deeper into the specifics of our implementation of the algorithm: in order to differentiate the smooth part of the composite objective functional and create a second order model of it around the current iterate, we take advantage of the automatic differentiation software package ADOL-C, cf. [30]. With the second order model at hand we can then consider subproblem (16), which has to be solved in order to obtain a candidate for the update of the current iterate. For the latter we employ a so-called Truncated Non-smooth Newton Multigrid method with a direct linear solver. This method can be summarized as a combination of exact non-smooth Gauß-Seidel steps for each component and global truncated Newton steps enhanced with a line search procedure. The scheme is analytically proven to converge for convex and coercive problems; for a more detailed description of the algorithm and its convergence properties consider [10].
However, the most delicate issue concerning the implementation of our algorithm and its application to the problem described above is the choice of the regularization parameter ω ≥ 0 along the sequence of iterates (x_k) ⊂ X. For now, we confine ourselves to displaying the convergence properties of the class of Proximal Newton methods in the scenario presented above and do not attach too much value to algorithmic technicalities. As a consequence, we take the rather heuristic approach of simply doubling ω whenever the sufficient descent criterion (18) (for γ = 1/2) is not satisfied by the current update step candidate, and of multiplying ω by 1/2^n otherwise, where n ∈ N denotes the number of consecutive accepted update steps.
The latter feature ensures that local fast convergence is recognized by the algorithm and that the regularization parameter quickly decreases once the iterates come close to the minimizer. For the superlinear convergence demonstrated in Theorem 1 to arise, undamped update steps have to be conducted, i.e., the regularization parameter has to be zero and not merely sufficiently small. For this reason we set ω = 0 once it falls below a threshold value ω₀ in the procedure described beforehand. Conversely, if a full update step is not accepted by the sufficient descent criterion, we set ω = ω₀ and from there on proceed as usual.
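The update rule for ω described above can be sketched as follows (our own pseudocode-like rendering; the names `omega_0`, `accepted`, and `n_consecutive` are placeholders, not identifiers from the paper's implementation):

```python
def update_omega(omega, accepted, n_consecutive, omega_0):
    """Heuristic regularization update described in the text:
    double omega on rejection; multiply by 1/2**n after n consecutive
    accepted steps; switch to full Newton steps (omega = 0) once the
    threshold omega_0 is undercut; reset to omega_0 if a full step
    (omega = 0) is rejected."""
    if not accepted:
        # A rejected full step falls back to the threshold value.
        return omega_0 if omega == 0.0 else 2.0 * omega
    omega = omega / (2.0 ** n_consecutive)
    if omega < omega_0:
        omega = 0.0  # undamped steps enable superlinear convergence
    return omega
```

Note that setting ω to exactly zero, rather than letting it merely become small, is what allows the algorithm to take the undamped steps required by Theorem 1.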
Even though the choice of ω considered here is rather heuristic and not problem-specific at all, it stands in perfect conformity with the theory established over the course of the previous sections and also successfully displays the global convergence and local acceleration of our Proximal Newton method for the model problem of minimizing (41) over H¹₀(Ω, R). Moreover, we added a threshold on the decrease of the modified quadratic model λ_ω(Δx(ω)) as a stopping criterion for our algorithm, i.e., the computation stops as soon as |λ_ω(Δx(ω))| < 10⁻¹⁴ for an admissible step Δx(ω).

Figure 1a shows a logarithmic plot of the correction norms ‖Δx_k‖ for constant values c = 80, β = 40, and ρ = −100, while α is increased from 0 to 240 in equidistant steps of 40. Quite predictably from the structure of the functional, increasing values of α make the minimization problem more difficult to solve for our method, but eventually local superlinear convergence is evident also for larger values of α. Figure 1b shows the corresponding values of the regularization parameter ω used along the accepted steps on the way to the minimizer.
Apart from those considerations, it is always insightful to compare the performance of our algorithm with existing methods on problems similar to (41). To this end, we considered two alternatives: firstly, a simple Proximal Gradient method with H¹-regularization, obtained by ignoring the second order bilinear form H_x in the update step subproblem (16); secondly, an accelerated Proximal Gradient method, namely the FISTA algorithm as presented in [21]. In Figures 2a and 2b, the norms of the update steps are plotted for both variants solving the same problem as above, i.e., c = 80, β = 40, and ρ = −100, while α is increased in equidistant steps of 40 from 0 to 160.
We recognize a clear difference in performance in the transition both from Proximal Gradient to FISTA and from FISTA to Proximal Newton across all α-variations of the considered toy problem. Even in the rather mild case α = 0, Proximal Gradient takes N = 5326 and FISTA takes N = 2498 iterations to reach the minimizer. Note that in this case we only used four uniform grid refinements due to the very high computational effort of the simulations, which does not diminish the qualitative significance of our observations. Furthermore, Table 1 displays the total number of iterations required to reach the minimizer of (1) for different grid sizes of the discretization and for the values of the prefactor α investigated beforehand. In the case α > 0 we observe some moderate increase in iteration numbers, which is attributed to the presence of a norm gap in the corresponding term.

Conclusion

Now that we have sufficiently displayed the global and local convergence properties of our Proximal Newton method, it is time to reflect on what we have achieved here, as well as to discuss some possible improvements of the algorithm and its implementation, which are a topic of future research: We have developed a globally convergent and locally accelerated Proximal Newton method in a Hilbert space setting which demands neither second order differentiability of the smooth part nor convexity of either part of the composite objective function. Concerning differentiability, we have introduced the notion of second order semi-smoothness. Concerning non-convexity, our theoretical framework uses quantified information on lacking convexity instead of simply resorting to a different first order update scheme in the non-convex case. The globalization scheme takes advantage of a proximal arc search procedure and thereby establishes stationarity of all limit points of the sequence of iterates. Additional convexity close to optimal solutions of the original problem leads to local acceleration of our method, which in particular does not rely on strong convexity of the smooth part, but only on strong convexity of the composite functional, thanks to a well-thought-out definition of proximal mappings within the theoretical framework. The application of our method to actual function space problems is enabled by using an efficient solver for the step computation subproblem, the Truncated Non-smooth Newton Multigrid method. We have displayed global convergence and local acceleration of our algorithm by considering a toy model problem in function space.

As we have already mentioned, the choice of the regularization parameter employed here is rather heuristic and not problem-specific at all. This issue can be addressed by using an estimate for the residual term of the quadratic model established in subproblem (16), as seen in [31] for adaptive affine conjugate Newton methods, where non-convex but smooth minimization problems from nonlinear elastomechanics have been thoroughly investigated. The idea behind this procedure is to evaluate actual residual terms for formerly computed correction candidates and then use them as a regularization parameter for the computation of the next update step candidate.

Another focal concern of our future work is taking into account inexactness in the computation of update steps. Inexact solutions of subproblem (16) are then required to at least satisfy certain inexactness criteria which still give access to similar global and local convergence results.

Table 1: Number of total iterations N for different grid sizes h and prefactor values α, for fixed parameters β = 40 and c = 80.