Accelerating inexact successive quadratic approximation for regularized optimization through manifold identification

For regularized optimization that minimizes the sum of a smooth term and a regularizer that promotes structured solutions, inexact proximal-Newton-type methods, or successive quadratic approximation (SQA) methods, are widely used for their superlinear convergence in terms of iterations. However, unlike their counterparts in smooth optimization, they suffer from lengthy running time in solving regularized subproblems because even approximate solutions cannot be computed easily, so their empirical time cost is not as impressive. In this work, we first show that for partly smooth regularizers, although general inexact solutions cannot identify the active manifold that makes the objective function smooth, approximate solutions generated by commonly-used subproblem solvers will identify this manifold, even with arbitrarily low solution precision. We then utilize this property to propose an improved SQA method, ISQA$^{+}$, that switches to efficient smooth optimization methods after this manifold is identified. We show that for a wide class of degenerate solutions, ISQA$^{+}$ possesses superlinear convergence not only in iterations, but also in running time, because the cost per iteration is bounded. In particular, our superlinear convergence result holds on problems satisfying a sharpness condition that is more general than those in the existing literature. We also prove iterate convergence under a sharpness condition for inexact SQA, which is novel for this family of methods, as they can easily violate the classical relative-error condition frequently used in proving convergence under similar conditions. Experiments on real-world problems support that ISQA$^{+}$ improves the running time over some modern solvers for regularized optimization.


Introduction
Consider the following regularized optimization problem:
$$\min_{x \in \mathcal{E}} \; F(x) := f(x) + \Psi(x), \tag{1}$$
where $\mathcal{E}$ is a Euclidean space with an inner product $\langle \cdot, \cdot \rangle$ and its induced norm $\|\cdot\|$, the regularizer $\Psi$ is extended-valued, convex, proper, and lower-semicontinuous, $f$ is continuously differentiable with Lipschitz-continuous gradient, and the solution set $\Omega$ is non-empty. This type of problem is ubiquitous in applications such as machine learning and signal processing (see, for example, [8,9,40]). One widely-used method for (1) is inexact successive quadratic approximation (ISQA). At the $t$th iteration with iterate $x^t$, ISQA obtains the update direction $p^t$ by approximately solving
$$\min_{p \in \mathcal{E}} \; Q_{H_t}(p; x^t), \tag{2}$$
where
$$Q_{H_t}(p; x^t) := \left\langle \nabla f(x^t), p \right\rangle + \tfrac{1}{2} \left\langle p, H_t p \right\rangle + \Psi(x^t + p) - \Psi(x^t) \tag{3}$$
and $H_t$ is a self-adjoint positive-semidefinite linear endomorphism of $\mathcal{E}$. The iterate is then updated along $p^t$ with a step size $\alpha_t > 0$.
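To make the framework concrete, the following is a minimal numerical sketch of one ISQA outer iteration for $\Psi = \lambda\|\cdot\|_1$, with a few proximal-gradient inner iterations standing in for the subproblem solver; the function names, the solver choice, and all constants are our illustration rather than the paper's implementation.

```python
import numpy as np

def soft_threshold(z, tau):
    # prox of tau * ||.||_1: coordinate-wise soft thresholding
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def Q(p, g, H, x, lam):
    # Q_{H_t}(p; x^t) for Psi = lam * ||.||_1, cf. (3)
    return g @ p + 0.5 * p @ (H @ p) + lam * (np.abs(x + p).sum() - np.abs(x).sum())

def isqa_step(x, f, grad_f, H, lam, inner_iters=5, gamma=1e-4, beta=0.5):
    """One outer ISQA iteration: approximately solve (2), then backtrack on alpha."""
    g = grad_f(x)
    step = 1.0 / np.linalg.norm(H, 2)   # inner step size <= 1/M for the smooth part of Q
    p = np.zeros_like(x)
    for _ in range(inner_iters):        # inexact subproblem solve by proximal gradient
        p = soft_threshold(x + p - step * (g + H @ p), step * lam) - x
    F = lambda y: f(y) + lam * np.abs(y).sum()
    alpha, q = 1.0, Q(p, g, H, x, lam)  # q < 0 after at least one inner iteration
    while F(x + alpha * p) > F(x) + gamma * alpha * q:   # sufficient decrease, cf. (6)
        alpha *= beta
    return x + alpha * p
```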
ISQA is among the most efficient methods for (1). Its variants differ in the choice of $H_t$ and $\alpha_t$, and in how accurately (2) is solved. In this class, proximal Newton (PN) [22,26] and proximal quasi-Newton (PQN) [38] are popular for their fast convergence in iterations. Regrettably, their subproblem has no closed-form solution as $H_t$ is non-diagonal, so one needs an iterative solver for (2), and the running time to reach the accuracy requirement can hence be lengthy. For example, to attain the same superlinear convergence as truncated Newton for smooth optimization, PN with $H_t = \nabla^2 f(x^t)$ requires increasing accuracies for the subproblem solution, implying a growing and unbounded number of inner iterations of the subproblem solver. Its superlinear convergence thus gives little practical advantage in running time. On the contrary, in smooth optimization, one can solve (2) at a bounded cost by either conjugate gradient (CG) or matrix factorizations since $\Psi \equiv 0$. The advantage of second-order methods over first-order ones in regularized optimization is therefore not as significant as in smooth optimization.
A possible remedy when $\Psi$ is partly smooth [24] is to switch to smooth optimization after identifying an active manifold $\mathcal{M}$ that contains a solution $x^*$ of (1) and makes $\Psi$ smooth when confined to it. We say an algorithm can identify $\mathcal{M}$ if there is a neighborhood $U$ of $x^*$ such that $x^t \in U$ implies $x^{t+1} \in \mathcal{M}$, and say it possesses the manifold identification property. Unfortunately, for ISQA, this property in general holds only when (2) is always solved exactly. Indeed, even if each $p^t$ is arbitrarily close to the corresponding exact solution, it is possible that no iterate lies in the active manifold, as shown below.
Example 1 Consider the following simple instance of (1):
$$F(x) = \tfrac{1}{2}\left\|x - (3, 0)^\top\right\|^2 + \|x\|_1,$$
whose only solution is $x^* = (2, 0)^\top$, and $\|x\|_1$ is smooth relative to $\mathcal{M} = \{x \mid x_2 = 0\}$ around $x^*$. Consider $\{x^t\}$ with $x^t_1 = 2 + \epsilon_t$, $x^t_2 = \epsilon_t$ for some $\epsilon_t > 0$ with $\epsilon_t \downarrow 0$, and let $H_t \equiv I$, $\alpha_t \equiv 1$, and $p^t = x^{t+1} - x^t$. The optimum of (2) is $p^{t*} = x^* - x^t$, so $\|x^t - x^*\| = O(\epsilon_t)$ and $\|p^t - p^{t*}\| = O(\epsilon_t)$. As $\{\epsilon_t\}$ is arbitrary, both the subproblem approximate solutions and their corresponding objectives converge to the optimum arbitrarily fast, but $x^t \notin \mathcal{M}$ for all $t$.
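A direct numeric check of Example 1 (under our reconstruction of $f$ above) shows both claims at once: the approximate directions converge to the exact ones as fast as we like, yet the minimum-norm subgradient $r^t$ of $Q_t$ at $p^t$ has norm tending to $1$ and no iterate touches $\mathcal{M}$.

```python
import numpy as np

eps = lambda t: 4.0 ** (-t)            # an arbitrary sequence eps_t -> 0
for t in range(1, 6):
    xt  = np.array([2.0 + eps(t),     eps(t)])      # iterate, never on M
    xt1 = np.array([2.0 + eps(t + 1), eps(t + 1)])  # next iterate
    p      = xt1 - xt                               # inexact direction p^t
    p_star = np.array([2.0, 0.0]) - xt              # exact solution of (2) with H_t = I
    grad_f = xt - np.array([3.0, 0.0])
    # minimum-norm element of the subdifferential of Q at p^t; both coordinates
    # of x^{t+1} are positive, so the l1 subgradient there is exactly (1, 1):
    r = grad_f + p + np.array([1.0, 1.0])
    print(t, np.linalg.norm(p - p_star), np.linalg.norm(r))
# ||p - p*|| -> 0 rapidly, yet ||r^t|| -> 1 and no iterate lies on M.
```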
Moreover, some versions of inexact PN generalize the stopping condition of CG for truncated Newton to require $\|r^t\| \to 0$, where
$$r^t := \arg\min_{r \in \partial_p Q_{H_t}(p^t; x^t)} \|r\|, \tag{4}$$
but Example 1 gives a sequence $\{\|r^t\|\}$ converging to $1$, hinting that such a condition might have an implicit relation with manifold identification. Interestingly, in our numerical experience in [20,21,27], ISQA with approximate subproblem solutions, even without increasing solution precision and on problems that are not strongly convex, often identifies the active manifold rapidly. We thus aim to provide theoretical support for this phenomenon and to utilize it to devise more efficient and practical methods that match the superior performance of second-order methods in smooth optimization.
In this work, we show that ISQA essentially possesses the manifold identification property, by giving a sufficient condition for inexact solutions of (2) in ISQA to identify the active manifold that is satisfied by the output of most widely-used subproblem solvers, even if (2) is solved arbitrarily roughly. We also show that $\|r^t\| \downarrow 0$ is indeed sufficient for manifold identification, so PN can achieve superlinear convergence more efficiently through this property. When the iterates do not lie in a compact set, it is possible that they do not converge, in which case even algorithms possessing the manifold identification property might fail to identify the active manifold because the iterates never enter a neighborhood that enables identification. Therefore, we also show convergence of the iterates under a sharpness condition, widely seen in real-world problems, that generalizes the quadratic growth condition and weak sharp minima. Under convexity, this sharpness condition is equivalent to a type of Kurdyka-Lojasiewicz (KL) condition [18,29], but convergence of general ISQA methods under the KL condition is unknown, since the inexactness condition can easily violate the relative-error condition needed in [2,4]; our analysis thus provides a novel approach to obtaining iterate convergence for this family of algorithms. Based on these results, we propose an improved, practical algorithm, ISQA$^+$, that switches to smooth optimization after the active manifold is presumably identified. We show that ISQA$^+$ is superior to existing PN-type methods, as it possesses the same superlinear and even quadratic rates in iterations but has bounded per-iteration cost. ISQA$^+$ hence also converges superlinearly in running time, which, to our best knowledge, is the first result of this kind. Our analysis is more general than existing ones in guaranteeing superlinear convergence for a broader class of degenerate problems. Numerical results also confirm ISQA$^+$'s much improved efficiency over PN and PQN.

Related Work
ISQA for (1), or the special case of constrained optimization, has been well-studied, and we refer readers to [20] for a detailed review of related methods. We mention here in particular the works [7,22,26,47] that provided superlinear convergence results. Lee et al. [22] first analyzed the superlinear convergence of PN and PQN. Their analysis considers only strongly convex $f$, so both the convergence of the iterates and the positive-definiteness of the Hessian are guaranteed. Their inexact version requires $\|r^t\| \downarrow 0$, which might not happen when the solutions to (2) are only approximate, as illustrated in Example 1. With the same requirement on $r^t$ as [22], Li et al. [26] showed that superlinear convergence of inexact PN can be achieved when $f$ is self-concordant. Byrd et al. [7] focused on $\Psi(\cdot) = \|\cdot\|_1$ and showed superlinear convergence of PN under the subproblem stopping condition (10) defined in Section 2, which is achievable as long as $p^t$ is close enough to the optimum of (2). To cope with degenerate cases in which the Hessian is only positive semidefinite, Yue et al. [47] used the stopping condition of [7] to propose a damped PN for general $\Psi$ and showed that its iterates converge and achieve superlinear convergence under convexity and the error-bound (EB) condition [30], even if $F$ is not coercive. A common drawback of [7,22,26,47] is that they all require increasing precision in solving the subproblem, so the superlinear rate is observed only in iterations but not in running time in their experiments. In contrast, by switching to smooth optimization after identifying the active manifold, ISQA$^+$ achieves superlinear convergence not only in iterations but also in time, and is thus much more efficient in practice. Our superlinear convergence result also allows a broader range of degeneracy than that in [47].
Although ISQA has been studied intensively, its ability for manifold identification is barely discussed, because it does not hold in general, as noted in Example 1. Hare [14] showed that ISQA identifies the active manifold under the impractical assumptions that (2) is always solved exactly and the iterates converge, and his analysis cannot be extended to inexact versions. Our observation in [20,21,27] that ISQA identifies the active manifold empirically motivated this work to provide theoretical guarantees for this phenomenon.
Manifold identification requires the iterates, or at least a subsequence, to converge to a point of partial smoothness. In most existing analyses for (1), iterate convergence is proven under one of: (i) $f$ is convex and the algorithm is a first-order one, (ii) $F$ is strongly convex, or (iii) the Kurdyka-Lojasiewicz (KL) condition holds. Analyses for the first scenario rely on the implicit regularization of first-order methods that keeps their iterates in a bounded region [28], but this is not applicable to ISQA. Under the second condition, convergence of the objective directly implies that of the iterates. For the third case, convergence of the full iterate sequence is usually proven under an assumption of relative-error behavior of the form
$$\operatorname{dist}\left(0, \partial F(x^{t+1})\right) \le b \left\|x^{t+1} - x^t\right\|$$
for some $b > 0$, as done in [2,5,10], but this condition can easily be violated when inexactness enters ISQA, as argued by Bonettini et al. [5]. To work around this issue, [5] further assumed that the forward-backward envelope [39] of $F$ satisfies the KL condition and obtained iterate convergence in that situation, but whether the KL condition of $F$ implies that of its forward-backward envelope is unclear. The only exception obtaining convergence under the KL condition of $F$ for a specific type of SQA method is [47], which shows convergence of the iterates of its specific algorithm under EB and convexity of $f$; however, [47] requires $H_t$ in (2) to be the Hessian of $f$ plus a multiple of the identity, and its analysis cannot be extended to general $H_t$. On the other hand, our analysis of iterate convergence is novel and more general in covering a much broader algorithmic framework and requiring only a general sharpness condition on $F$ that contains both EB and weak sharp minima [6] as special cases.
Our design of the two-stage ISQA$^+$ is inspired by [23,27] in conjecturing that the active manifold has been identified after the current manifold remains unchanged for several iterations, but the design of the first stage is quite different, and we also add safeguards in the second stage. Lee and Wright [23] used dual averaging in the first stage for optimizing the expected value of an objective function involving random variables, so their algorithm is more suitable for stochastic settings. Li et al. [27] focused on distributed optimization, and their usage of manifold identification is for reducing communication cost, rather than accelerating general regularized optimization as considered in this work.

Outline
This work is outlined as follows. In Section 2, we describe the algorithmic framework and give preliminary properties. Technical results in Section 3 prove the manifold identification property of ISQA and the convergence of the iterates. We then describe the proposed ISQA$^+$ and show its superlinear convergence in running time in Section 4. The effectiveness of ISQA$^+$ is then illustrated through extensive numerical experiments in Section 5. Section 6 finally concludes this work. Our implementation of the described algorithms is available at https://www.github.com/leepei/ISQA_plus/.

Preliminaries
We denote the minimum of (1) by $F^*$; the domain of $\Psi$ by $\operatorname{dom}(\Psi)$; and the set of convex, proper, and lower semicontinuous functions by $\Gamma_0$. For any set $C$, $\operatorname{relint}(C)$ denotes its relative interior. We will frequently use the following notation.
The level set $\operatorname{Lev}(\xi) := \{x \mid F(x) - F^* \le \xi\}$ for any $\xi \ge 0$ is closed but not necessarily bounded. A function is $L$-smooth if it is differentiable with its gradient $L$-Lipschitz continuous. We denote the identity operator by $I$. For self-adjoint linear endomorphisms $A, B$ of $\mathcal{E}$, $A \succ B$ ($A \succeq B$) means $A - B$ is positive definite (positive semidefinite). We abbreviate $A \succeq \tau I$ as $A \succeq \tau$ for $\tau \in \mathbb{R}$. The set of $A$ with $A \succ 0$ is denoted by $\mathcal{S}_{++}$. The subdifferential $\partial \Psi(x)$ of $\Psi$ at $x$ is well-defined as $\Psi \in \Gamma_0$, and hence so is the generalized gradient $\partial F(x) = \nabla f(x) + \partial \Psi(x)$. For any $g \in \Gamma_0$, $\tau \ge 0$, and $\Lambda \in \mathcal{S}_{++}$, the proximal mapping
$$\operatorname{prox}^{\Lambda}_{\tau g}(x) := \arg\min_{y} \; \tau g(y) + \tfrac{1}{2} \left\langle y - x, \Lambda (y - x) \right\rangle \tag{5}$$
is continuous and finite on $\mathcal{E}$, even outside $\operatorname{dom}(g)$. When $\Lambda = I$, (5) is shortened to $\operatorname{prox}_{\tau g}(x)$. For (2), we denote its optimal solution by $p^{t*}$. When there is no ambiguity, we abbreviate $Q_{H_t}(\cdot\,; x^t)$ as $Q_t(\cdot)$ and denote the optimal objective value of (2) by $Q_t^*$.
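For instance, when $\Lambda$ is the identity or a diagonal matrix and $g = \|\cdot\|_1$, (5) has the closed forms sketched below; this is our illustration under those assumptions, as a general non-diagonal $\Lambda$ admits no closed form.

```python
import numpy as np

def prox_l1(x, tau):
    # prox_{tau * ||.||_1}(x): coordinate-wise soft thresholding,
    # the closed form of (5) with Lambda = I and g = ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_l1_diag(x, tau, d):
    # (5) with a diagonal Lambda = diag(d), d > 0: the metric simply
    # reweights the threshold coordinate-wise to tau / d_i
    return np.sign(x) * np.maximum(np.abs(x) - tau / d, 0.0)
```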

Algorithmic Framework
We now give the details defining the ISQA framework by discussing the choice of $H_t$, the subproblem solver and its stopping condition, and how sufficient objective decrease is ensured. We view the algorithm as a two-level loop procedure, where the outer loop updates the iterate $x^t$ and the iterations of the subproblem solver form the inner loop.
For sufficient objective decrease, given $\gamma \in (0, 1)$, we require the step size $\alpha_t$ to satisfy
$$F(x^t + \alpha_t p^t) \le F(x^t) + \gamma \alpha_t Q_t(p^t). \tag{6}$$
This condition is satisfied by all $\alpha_t$ small enough as long as $Q_t(p^t) < 0$ and $Q_t(\cdot)$ is strongly convex; see [20, Lemma 3].
For the choice of $H_t$, we only make the following blanket assumption, without further specification, to keep our analysis general: there are $M \ge m > 0$ such that
$$M \succeq H_t \succeq m, \quad \forall t. \tag{7}$$
For (2), any suitable solver for regularized optimization, such as (accelerated) proximal gradient, (variance-reduced) stochastic gradient methods, and their variants, can be used, and the following conditions are common for terminating the inner loop: for some given $\epsilon_t \ge 0$ and $\tau > 0$,
$$Q_t(p^t) - Q_t^* \le \epsilon_t, \tag{8}$$
$$\operatorname{dist}\left(0, \partial Q_t(p^t)\right) \le \epsilon_t, \tag{9}$$
$$\left\|\hat{p}^t_\tau - p^t\right\| \le \epsilon_t, \tag{10}$$
where
$$\hat{p}^t_\tau := \operatorname{prox}_{\tau \Psi(x^t + \cdot)}\left(p^t - \tau \left(\nabla f(x^t) + H_t p^t\right)\right). \tag{11}$$
The point $\hat{p}^t_\tau$ in (11) is computed by taking a proximal gradient step of the subproblem (2) from $p^t$, and thus $\hat{p}^t_\tau - p^t$ is the proximal gradient (with step size $\tau$) of $Q_t$ at $p^t$. Because the subproblem is strongly convex, the norm of the proximal gradient is zero at $p^t$ if and only if $p^t$ is the unique solution to the subproblem. We will see in the next subsection that the squared norm of this proximal gradient is also equivalent to the objective distance to the optimum of the subproblem. We summarize this framework in Algorithm 1.
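As a sketch of how (10)-(11) can be checked in practice for $\Psi = \lambda\|\cdot\|_1$ (our concretization; other regularizers replace the soft threshold with the appropriate prox):

```python
import numpy as np

def subproblem_pg_point(p, x, g, H, lam, tau):
    """One proximal-gradient step of (2) from p, the point written \\hat p^t_tau
    in (11); Psi = lam * ||.||_1 assumed for concreteness."""
    z = x + p - tau * (g + H @ p)                # forward step on the smooth part of Q
    return np.sign(z) * np.maximum(np.abs(z) - tau * lam, 0.0) - x

def pg_residual(p, x, g, H, lam, tau):
    # ||\hat p^t_tau - p^t||, zero iff p solves the strongly convex subproblem (2)
    return np.linalg.norm(subproblem_pg_point(p, x, g, H, lam, tau) - p)
```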

Basic Properties
Under (7), (3) is $m$-strongly convex with respect to $p$, so the standard results (12)-(13), relating the proximal-gradient norm to the distance to the subproblem optimum in both iterates and objective values, hold for any $\tau \in (0, 1/M]$ and any $p^t \in \mathcal{E}$ [12,32]. Therefore, (10) and (8) are almost equivalent and implied by (9), while Example 1 has shown that (9) is a stronger condition not implied by (8). Although (13) does not show that (10) directly implies (8), once (10) is satisfied, we can use it to find $\hat{p}^t_\tau$ satisfying (8) from $p^t$. A central focus of this work is manifold identification, so we first formally define manifolds, following [42].
A set $\mathcal{M} \subseteq \mathcal{E}$ is a $C^k$ manifold around $\bar{x} \in \mathcal{M}$ if there is a $C^k$ map $\Phi$ whose derivative at $\bar{x}$ is surjective such that for $y$ close enough to $\bar{x}$, $y \in \mathcal{M}$ if and only if $\Phi(y) = 0$. Through the implicit function theorem, we can also use a $C^k$ parameterization $\phi : \mathbb{R}^p \to \mathcal{M}$, with $\phi(\bar{y}) = \bar{x}$ and the derivative injective at $\bar{y}$, to describe a neighborhood of $\bar{x}$ on $\mathcal{M}$. Now we are ready for the definition of partial smoothness [24], which we assume for the regularizer when discussing manifold identification.
Intuitively, partial smoothness means that $\Psi$ is smooth around $x^*$ within $\mathcal{M}$ but changes drastically along directions leaving the manifold. We also call this $\mathcal{M}$ the active manifold.
As the original identification results in [15, Theorem 5.3] and [25, Theorem 4.10] require the sum $F$ to be partly smooth, while our setting does not require this of $f$, we first provide a result relaxing the conditions to ensure identification in our scenario.
Lemma 1 Consider (1) with $f \in C^1$ and $\Psi$ convex, proper, closed, and partly smooth at a point $x^*$ relative to a $C^2$-manifold $\mathcal{M}$. If at $x^*$ the nondegeneracy condition
$$0 \in \operatorname{relint} \partial F(x^*) \tag{14}$$
holds, and there is a sequence $\{x^t\}$ converging to $x^*$ with $F(x^t) \to F(x^*)$, then
$$\operatorname{dist}\left(0, \partial F(x^t)\right) \to 0 \tag{15}$$
if and only if $x^t \in \mathcal{M}$ for all $t$ large.
Proof We first observe that, as $f$ is continuous, $F(x^t) \to F(x^*)$ and $x^t \to x^*$ imply $\Psi(x^t) \to \Psi(x^*)$. Moreover, convex functions are prox-regular everywhere. Therefore, the premises of [25, Theorem 4.10] on $\Psi$ are satisfied. We then note that $\operatorname{dist}(0, \partial F(x^t)) = \operatorname{dist}(-\nabla f(x^t), \partial \Psi(x^t))$, so by (14), (15) is further equivalent to $\operatorname{dist}(-\nabla f(x^t), \partial \Psi(x^t)) \to 0$, which is the necessary and sufficient condition for $x^t \in \mathcal{M}$ for all $t$ large in [25, Theorem 4.10], because (14) indicates that $-\nabla f(x^*) \in \operatorname{relint}(\partial \Psi(x^*))$. We then apply that theorem to obtain the desired result.

Using Lemma 1, we further state an identification result for (1) under our setting without the need to check whether $\{F(x^t)\}$ converges to $F(x^*)$. This will be useful in our later theoretical development.

Lemma 2 Consider (1) with $f \in C^1$ and $\Psi$ convex, proper, closed, and partly smooth at a point $x^*$ relative to a $C^2$-manifold $\mathcal{M}$, and suppose (14) holds at $x^*$. If a sequence $\{x^t\}$ converges to $x^*$ with $\operatorname{dist}(0, \partial F(x^t)) \to 0$, then $x^t \in \mathcal{M}$ for all $t$ large.
We note that the requirement of the above two lemmas is partial smoothness of $\Psi$, instead of $F$, at $x^*$. It is therefore possible to apply them even when $F$ itself is not partly smooth at $x^*$.

Manifold Identification of ISQA

Our first major result is the manifold identification property of Algorithm 1. We start by showing that the strong condition (9) with $\epsilon_t \downarrow 0$ is sufficient.
Theorem 1 Consider a point $x^*$ satisfying (14) with $\Psi \in \Gamma_0$ partly smooth at $x^*$ relative to some manifold $\mathcal{M}$. Assume $f$ is locally $L$-smooth for some $L > 0$ around $x^*$. If Algorithm 1 is run with the condition (9) and (7) holds, then there exist $\epsilon, \delta > 0$ such that $\|x^t - x^*\| \le \delta$, $\epsilon_t \le \epsilon$, and $\alpha_t = 1$ imply $x^{t+1} \in \mathcal{M}$.
Proof Since each iteration of Algorithm 1 is independent of the previous ones, we abuse notation to let $x^t$ be the input of Algorithm 1 at the $t$th iteration and $p^t$ the corresponding inexact solution to (2), so that $x^{t+1}$ is irrelevant to $p^t$ or $\alpha_t$. Assume for contradiction that the statement is false. Then there exist a sequence $\{x^t\} \subset \mathcal{E}$ converging to $x^*$, a nonnegative sequence $\{\epsilon_t\}$ converging to $0$, a sequence $\{H_t\} \subset \mathcal{S}_{++}$ satisfying (7), and a sequence $\{p^t\} \subset \mathcal{E}$ such that $Q_{H_t}(p^t; x^t)$ in (3) satisfies (9) for all $t$, yet $x^t + p^t \notin \mathcal{M}$ for all $t$. From (4), we obtain (16), and therefore (17) bounds $\operatorname{dist}(0, \partial F(x^t + p^t))$. From (7) and the convexity of $\Psi$, $Q_{H_t}(\cdot\,; x^t)$ is $m$-strongly convex, which implies (18). Combining (18) and (12) yields (19). Since $\{x^t\}$ converges to $x^*$, the argument in [43, Lemma 3.2] gives (20) whenever $x^t$ is close enough to $x^*$. Thus (19), (20), (9), and $\epsilon_t \downarrow 0$ imply (21). Substituting (9) and (21) into (17) gives $\operatorname{dist}(0, \partial F(x^t + p^t)) \to 0$, and by (21) we also get $x^t + p^t \to x^* + 0 = x^*$. Therefore, Lemma 2 implies that $x^t + p^t \in \mathcal{M}$ for all $t$ large enough, proving the desired contradiction.
Theorem 1 shows that if a variant of PN or PQN needs $\|r^t\| \downarrow 0$ to achieve superlinear convergence, $\mathcal{M}$ will be identified along the way, so one can reduce the running time by switching to smooth optimization, which can be conducted more efficiently while retaining the same superlinear convergence. Moreover, although Theorem 1 shows that (9) is sufficient for identifying the active manifold, that condition might never be satisfied, as Example 1 showed. We therefore provide another sufficient condition for ISQA to identify the active manifold that is satisfied by most widely-used solvers for (2), showing that ISQA essentially possesses the manifold identification property. This result uses the condition (8), which is weaker than (4), and we follow [20] to define $\{\epsilon_t\}$ using a given $\eta \in [0, 1)$:
$$\epsilon_t = \eta \left|Q_t^*\right|. \tag{22}$$
As argued in [20] and practically adopted in various implementations including [13,19,21], it is easy to ensure that (8) with (22) holds for some $\eta < 1$ under (7) (although the explicit value might be unknown) if we apply a linearly convergent subproblem solver with at least a pre-specified number of iterations to (2).
Theorem 2 Consider the setting of Theorem 1. Suppose Algorithm 1 is run with (8) and (22) for some $\eta \in [0, 1)$, (7) holds, and the update direction $p^t$ satisfies (23), where $s^t$ satisfies $\|s^t\| \le R(\|y^t - (x^t + p^{t*})\|)$ for some $R : [0, \infty) \to [0, \infty)$ continuous in its domain with $R(0) = 0$, $\Lambda_t \in \mathcal{S}_{++}$ with $M_1 \succeq \Lambda_t$ for some $M_1 > 0$, and $y^t$ satisfies (24) for some $\nu > 0$ and $\eta_1 \ge 0$. Then there exist $\epsilon, \delta > 0$ such that $\|x^t - x^*\| \le \delta$ and $|Q_t^*| \le \epsilon$ imply $x^t + p^t \in \mathcal{M}$.

Proof Suppose for contradiction that the statement is not true. Then there exist a continuous function $R$ as above, a sequence $\{x^t\}$ converging to $x^*$, a sequence $\{H_t\} \subset \mathcal{S}_{++}$ satisfying (7) with
$$\lim_{t \to \infty} \min_p Q_{H_t}(p; x^t) = 0, \tag{25}$$
three sequences $\{p^t\}, \{y^t\}, \{s^t\} \subset \mathcal{E}$, and a sequence $\{\Lambda_t\} \subset \mathcal{S}_{++}$ with $M_1 \succeq \Lambda_t$ such that (8) with (22) and (23)-(24) hold, yet $x^t + p^t \notin \mathcal{M}$ for all $t$. We abuse notation to let $p^{t*}$ and $Q_t^*$ respectively denote the optimal solution and objective value of $\min_p Q_{H_t}(p; x^t)$, so that $x^{t+1}$ is irrelevant to $p^t$ or $\alpha_t$.
The optimality condition of (5) applied to (23) indicates (26). Thus, we have (27). For the first two terms in (27), the triangle inequality and (18) imply (28) and (29). For the last term in (27), our definition of $s^t$ gives (30). Substituting (28)-(30) back into (27) yields (31). Note that $Q_{H_t}(0; x^t) \equiv 0$, so $-Q_t^* \ge 0$ by optimality, and the right-hand side of (31) is well-defined. Next, we see from (25), (24), and the continuity of $R$ that (32) holds. Applying (25) and (32) to (31) and letting $t$ approach infinity then yields
$$\lim_{t \to \infty} \operatorname{dist}\left(0, \partial F(x^t + p^t)\right) = 0. \tag{33}$$
Next, from (28) and (25), it is also clear that $p^t \to 0$, so from the convergence of $x^t$ to $x^*$ we have (34). Now (34) and (33) allow us to apply Lemma 2, so $x^t + p^t \in \mathcal{M}$ for all $t$ large enough, leading to the desired contradiction.
The function $R$ can be seen as a general residual function, and we only need from it that $s^t$ approaches $0$ together with $\|y^t - (x^t + p^{t*})\|$; Theorem 2 can be used as long as we can show that such an $R$ exists, even if its exact form is unknown. Condition (24) is deliberately chosen to exclude the objective $Q_t(y^t - x^t)$, so that broader algorithmic choices, such as those with $y^t \notin \operatorname{dom}(\Psi)$, can be included. One concern for Theorem 2 is the requirement $|Q_t^*| \le \epsilon$. Fortunately, for Algorithm 1 with (8) and (22), if the $\alpha_t$ are lower-bounded by some $\bar{\alpha} > 0$ (which is true under (7) by [20, Corollary 1]) and $F$ is lower-bounded, then (6) together with (8) and (22) shows that $-Q_t^*$ is summable and thus decreases to $0$. We now provide several examples satisfying (23)-(24) to demonstrate the usefulness of Theorem 2. In the description below, $p^{t,i}$ denotes the $i$th iterate of the subproblem solver at the $t$th outer iteration, and $x^{t,i} := x^t + p^{t,i}$.
- Proximal gradient (PG): These methods generate the inner iterates by
$$p^{t,i} = \operatorname{prox}_{\lambda_{t,i}\Psi(x^t+\cdot)}\left(p^{t,i-1} - \lambda_{t,i}\left(\nabla f(x^t) + H_t p^{t,i-1}\right)\right)$$
for some $\{\lambda_{t,i}\}$ bounded in a positive range that guarantees $\{Q_t(p^{t,i})\}_i$ is a decreasing sequence for all $t$ (decided through pre-specification, line search, or other mechanisms). Therefore, for any $t$, no matter which $i$ is the last inner iteration, (23) is satisfied with $y^t = x^{t,i-1}$, $s^t = 0$, and $\Lambda_t = \lambda_{t,i-1} I$. The condition (22) holds for some $\eta < 1$ because proximal-gradient-type methods are descent methods with Q-linear convergence on strongly convex problems, and (24) holds with $\eta_1 = \sqrt{2\eta/m}$ and $\nu = 1/2$ from (18). A sketch of this certificate structure follows this list.
- Accelerated proximal gradient (APG): The iterates $x^{t,i}$ are generated by a prox step from an extrapolated point $y^{t,i}$ with momentum determined by $\kappa(H_t)$, where $\kappa(H_t) \ge 1$ is the condition number of $H_t$, and APG satisfies the linear-rate bound (36) [32]. Since $\kappa(H_t) \ge 1$, $p^{t,i}$ satisfies (8) with (22) for all $i \ge \sqrt{\kappa(H_t)} \ln 2$. If such a $p^{t,i}$ is output as our $p^t$, we see from (36) that (23) holds with $s^t = 0$ and $\Lambda_t = H_t$. The only condition left to check is hence whether $y^{t,i}$ satisfies (24). The case $i = 1$ holds trivially with $\eta_1 = \sqrt{2/m}$. For $i > 1$, (24) holds with $\eta_1 = 3\sqrt{2\eta m^{-1}}$ and $\nu = 1/2$, because a bound analogous to (36) holds for the extrapolation sequence.
- Prox-SAGA/SVRG: These methods update the iterates by a proximal step analogous to that of PG but with a stochastic gradient estimate, with $x^{t,0} = x^t$, $\{\lambda_t\}$ bounded in a positive range, and $\{s^{t,i}\}$ random variables converging to $0$ as $x^{t,i} - x^t$ approaches $p^{t*}$. (For a detailed description, see, for example, [36].) It is shown in [45] that for prox-SVRG, $Q_t(x^{t,i} - x^t) - Q_t^*$ converges linearly to $0$ with respect to $i$ if $\lambda_t$ is small enough, so (8) with (22) is satisfied. A similar but more involved bound for prox-SAGA can be derived from the results in [11]. When $p^{t,i} = x^{t,i} - x^t$ for some $i > 0$ is output as $p^t$, we get $y^t = x^{t,i-1}$, $\Lambda_t = \lambda_t I$, and $s^t = s^{t,i}$ in (23), so the requirements of Theorem 2 hold.
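Below is a minimal sketch (with $\Psi = \lambda\|\cdot\|_1$ and all names ours) of the PG inner solver returning, besides the direction, the certificate that $x^t + p^t$ is exactly one prox step from the previous inner iterate $y^t = x^{t,i-1}$, which is the structure (23) asks for with $s^t = 0$:

```python
import numpy as np

def pg_subproblem_solver(x, g, H, lam, step, iters):
    """Proximal-gradient inner solver for (2) with Psi = lam * ||.||_1 (a sketch).
    Returns the direction p and the pair (y, step) certifying that x + p is one
    prox step taken from the previous inner iterate y = x^{t,i-1}; iters >= 1."""
    p = np.zeros_like(x)
    y = x + p
    for _ in range(iters):
        y = x + p                                   # previous inner iterate
        z = y - step * (g + H @ p)                  # forward step on smooth part of Q
        p = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0) - x
    return p, y, step
```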
If $\Psi$ is block-separable and $\mathcal{M}$ is decomposable into a product of submanifolds conforming to the blocks of $\Psi$, (23) can be modified easily to suit block-wise solvers like block-coordinate descent. This extension simply adapts the analysis above to the block-wise setting, so the proof is straightforward and omitted for brevity.

Iterate Convergence
Theorem 1 and Theorem 2 both indicate that for Algorithm 1 to identify $\mathcal{M}$, we need $x^t$ (or at least a subsequence) to converge to a point $x^*$ of partial smoothness. We thus complement our analysis by showing convergence of the iterates under convexity of $f$ and a local sharpness condition, which is a special case of the KL condition and is ubiquitous in real-world problems, without any additional requirement on the algorithm such as the relative-error condition in [2,4,5]. In particular, we assume $F$ satisfies the following for some $\zeta, \xi > 0$ and $\theta \in (0, 1]$:
$$\zeta \operatorname{dist}(x, \Omega) \le \left(F(x) - F^*\right)^{\theta}, \quad \forall x \in \operatorname{Lev}(\xi). \tag{38}$$
This becomes the well-known quadratic growth condition when $\theta = 1/2$, and $\theta = 1$ corresponds to weak sharp minima [6]. As discussed in Section 1.1, under convexity of $f$, [47] showed that the iterates of their PN variant converge to some $x^* \in \Omega$ if $F$ satisfies EB, which is equivalent to the quadratic growth condition in their setting [12]. Our analysis allows broader choices of $H_t$, and (strong) iterate convergence is proven for $\theta \in (1/4, 1]$.
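As a quick sanity check of (38), as reconstructed above, on two scalar examples:

```python
import numpy as np

# F(x) = x**2   : quadratic growth, (38) holds with theta = 1/2, zeta = 1
# F(x) = abs(x) : weak sharp minima, (38) holds with theta = 1,   zeta = 1
# In both cases Omega = {0}, so dist(x, Omega) = |x|.
for F, theta in [(lambda x: x ** 2, 0.5), (lambda x: np.abs(x), 1.0)]:
    xs = np.linspace(-1.0, 1.0, 101)
    assert np.all(np.abs(xs) <= F(xs) ** theta + 1e-12)
```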
Theorem 3 Consider Algorithm 1 with any $x^0 \in \mathcal{E}$. Assume that $\Omega \neq \emptyset$, $f$ is convex and $L$-smooth for some $L > 0$, $\Psi \in \Gamma_0$, (7) holds, there is $\eta \in [0, 1)$ such that $p^t$ satisfies (8) with (22) for all $t$, and (38) holds for some $\xi, \zeta > 0$ and $\theta \in (1/4, 1]$. Then $x^t \to x^*$ for some $x^* \in \Omega$.

The convergence in Theorem 3 holds in infinite-dimensional real Hilbert spaces with strong convergence (which is indistinguishable from weak convergence in the finite-dimensional case), and the proof in Appendix B is written in this general scenario. The key to its proof is the following improved convergence rate, which might be of independent interest. Except that the case $\theta = 1/2$ has been proven by Peng et al. [34] and that $\theta = 0$ reduces to the general convex case analyzed in [20], this faster convergence rate is, to our knowledge, new for ISQA.
Theorem 4 Consider the setting of Theorem 3 and let $\delta_t := F(x^t) - F^*$. Then there is $\bar{\alpha} > 0$ such that $\alpha_t \ge \bar{\alpha}$ for all $t$, and the following hold.
1. For $\theta \in (1/2, 1]$: when $\delta_t$ is large enough to satisfy (39), we have (40). Next, letting $t_0$ be the first index failing (39), for all $t \ge t_0$ we have (41).
2. For $\theta = 1/2$, we have global Q-linear convergence of $\delta_t$ in the form (42).
3. For $\theta \in [0, 1/2)$, (41) takes place when $\delta_t$ is large enough to satisfy (39). Letting $t_0$ be the first index such that (39) fails, for all $t \ge t_0$ we have (43).

For the range of $\theta$ in Theorem 3, convergence of the iterates generated by inexact SQA is a new result. Moreover, even if the additional conditions on the forward-backward envelope in [5] also hold, although their analysis also uses the subproblem stopping condition (8), they need a much stricter stopping tolerance whose form clearly requires their tolerance $\tau_t$ to converge to $0$ fast enough, and thus costs much more time in the subproblem solves.
In contrast, we just need $\epsilon_t$ to be a constant factor of $|Q_t^*|$ as in (22), so the number of inner iterations, and thus the cost of each subproblem solve, can stay constant.

An Efficient Inexact SQA Method with Superlinear Convergence in Running Time
Now that it is clear that ISQA is able to identify the active manifold, we utilize the fact that the optimization problem reduces to a smooth one after the manifold is identified to devise more efficient approaches, with safeguards ensuring that the correct manifold has really been identified. The improved algorithm, ISQA$^+$, is presented in Algorithm 2, and we explain the details below.
ISQA$^+$ has two stages, separated by the event of identifying the active manifold $\mathcal{M}$ of a cluster point $x^*$. Our analysis showed that iterates converging to $x^*$ will eventually identify $\mathcal{M}$, but since neither $x^*$ nor $\mathcal{M}$ is known a priori, the conjecture of identification can only be made after the current manifold has remained unchanged for $S > 0$ iterations.
Most parts of the first stage are the same as Algorithm 1, although we have added specifications for the subproblem solver according to Theorem 2. The only major difference is that instead of a line search, ISQA$^+$ adjusts $H_t$ and re-solves (2) whenever (6) fails with $\alpha_t = 1$. This trust-region-like approach has guaranteed global convergence from [20] and ensures $\alpha_t = 1$, so that Theorem 2 is applicable.
Algorithm 2: ISQA$^+$: An improved inexact successive quadratic approximation method utilizing manifold identification

input: $x^0 \in \mathcal{E}$; $\gamma, \beta \in (0, 1)$; $S, T \in \mathbb{N}$; a subproblem solver $\mathcal{A}$ satisfying (23)-(24) that is linearly convergent for (2)
Compute an upper bound $\hat{L} \ge L$ of the Lipschitz constant $L$ of $\nabla f$
SmoothStep $\leftarrow 0$, Unchanged $\leftarrow 0$
for $t = 0, 1, \dots$ do
  Identify $\mathcal{M}_{x^t}$ such that $\Psi$ is partly smooth relative to $\mathcal{M}_{x^t}$ at $x^t$
  if $\mathcal{M}_{x^t}$ remains the same as in the last iteration and $\mathcal{M}_{x^t} \neq \emptyset$ then Unchanged $\leftarrow$ Unchanged $+ 1$, else Unchanged $\leftarrow 0$
  if Unchanged $< S$ then (first stage)
    Decide $H_t$ and solve (2) using $\mathcal{A}$ with at least $T$ iterations
    while (6) is not satisfied with $\alpha_t = 1$ do enlarge $H_t$ and re-solve (2) using $\mathcal{A}$ with at least $T$ iterations
    SmoothStep $\leftarrow 0$
  else if SmoothStep $= 1$ then (second stage, safeguard)
    SmoothStep $\leftarrow 0$ and conduct a proximal gradient step (44)
  else (second stage, manifold optimization)
    Try to find $x^{t+1}$ with $F(x^{t+1}) \le F(x^t)$ by truncated Newton (Algorithm 3) or a Riemannian quasi-Newton method
    if manifold optimization fails then $x^{t+1} \leftarrow x^t$, Unchanged $\leftarrow 0$, else SmoothStep $\leftarrow 1$

In the second stage, we alternate between a standard proximal gradient (PG) step (44) and a manifold optimization (MO) step. PG is equivalent to solving (2) with $H_t = \hat{L} I$ to optimality, so Theorem 2 applies. When $\mathcal{M}$ is not correctly identified, a PG step thus prevents us from getting stuck at a wrong manifold, while when the superlinear convergence phase of the MO step is reached, using PG instead of solving (2) with a sophisticated $H_t$ avoids redundant computation.
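The following self-contained sketch mimics Algorithm 2's control flow on a tiny $\ell_1$-regularized least-squares instance, with a PG step standing in for the stage-one ISQA step and a Newton step on the active support as the MO step; every routine and constant here is our simplification, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = A @ (np.arange(20) < 3).astype(float)
lam = 5.0
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the smooth part
grad_f = lambda x: A.T @ (A @ x - b)
F = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum()
prox = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def pg_step(x):                               # step (44): proximal gradient with 1/L
    return prox(x - grad_f(x) / L, lam / L)

def mo_step(x, support):                      # Newton on the active manifold of l1
    As = A[:, support]
    g = As.T @ (As @ x[support] - b) + lam * np.sign(x[support])
    q = np.linalg.solve(As.T @ As + 1e-12 * np.eye(As.shape[1]), -g)
    y = x.copy(); y[support] += q
    return (F(y) <= F(x)), y                  # only accept non-increase of F

x, S, unchanged, smooth_step, prev = np.zeros(20), 3, 0, False, None
for t in range(100):
    supp = tuple(np.nonzero(x)[0])            # current manifold: the zero pattern
    unchanged = unchanged + 1 if supp == prev else 0
    prev = supp
    if unchanged < S or not supp:             # stage 1 (PG as stand-in for ISQA)
        x, smooth_step = pg_step(x), False
    elif smooth_step:                         # stage 2: safeguard PG step
        x, smooth_step = pg_step(x), False
    else:                                     # stage 2: manifold optimization
        ok, y = mo_step(x, list(supp))
        if ok: x, smooth_step = y, True
        else:  unchanged = 0                  # fall back to stage 1
print(F(x))
```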
When the objective is partly smooth relative to a manifold $\mathcal{M}$, optimizing it within $\mathcal{M}$ can be cast as a manifold optimization problem, and efficient algorithms of this type abound (see, for example, [1] for an overview). The difference between applying MO methods and sticking to (2) is that in the former, we can obtain the exact solution of the subproblem generating the update direction in finite time, because the subproblem in the MO step is simply an unconstrained quadratic optimization problem whose solution can be found by solving a linear system, while in the latter it takes indefinitely long to compute the exact solution; the former is thus preferred in practice for better running time. Although we did not assume that $f$ is twice-differentiable, its generalized Hessian (denoted by $\nabla^2 f$) exists everywhere since the gradient is Lipschitz continuous [16]. As discussed in Section 2, we can find a $C^2$ parameterization $\phi$ of $\mathcal{M}$ around $x^*$, and we use this $\phi$ to describe a truncated semismooth Newton (TSSN) approach. Since $\mathcal{M}$ might change between iterations, when we conduct MO at the $t$th iteration, we find a parameterization $\phi_t$ of the current $\mathcal{M}$ and a point $y^t$ such that $\phi_t(y^t) = x^t$. If $\mathcal{M}$ remains fixed, we also retain the same $\phi_t$. The TSSN step $q^t$ is then obtained by using preconditioned CG (PCG; see, for example, [33, Chapter 5]) to find an approximate solution
$$q^t \approx \arg\min_q \; \left\langle g^t, q \right\rangle + \tfrac{1}{2} \left\langle q, H^t q \right\rangle, \quad \text{or equivalently} \quad H^t q^t \approx -g^t, \tag{45}$$
where $g^t := \nabla F_{\phi_t}(y^t)$ and $H^t := \nabla^2 F_{\phi_t}(y^t)$, satisfying
$$\left\|H^t q^t + g^t\right\| \le c \left\|g^t\right\|^{1 + \rho} \tag{46}$$
with pre-specified $c > 0$ and $\rho \in (0, 1]$. We then run a backtracking line search to find a suitable step size $\alpha_t > 0$. To achieve superlinear convergence, we should accept the unit step size whenever possible, so we only require the objective not to increase. If $q^t$ is not a descent direction or $\alpha_t$ is too small, we consider the MO step failed and go back to the first stage. If $\alpha_t < 1$, the superlinear convergence phase has not been entered yet, and likely $\mathcal{M}$ has not been correctly identified, so we also switch back to the first stage. This procedure is summarized in Algorithm 3. When products between $\nabla^2 F(\phi_t(y^t))$ and arbitrary vectors, required by PCG, cannot be computed easily, one can adopt Riemannian quasi-Newton approaches like [17] instead.
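A sketch of the truncated Newton direction computation with the forcing-style stopping rule we read into (46); plain CG is used here, whereas the actual Algorithm 3 uses a preconditioner and the generalized Hessian of $F_{\phi_t}$:

```python
import numpy as np

def truncated_newton_direction(H_mv, g, c=1e-6, rho=0.5, max_iter=200):
    """Conjugate gradient on H q = -g, stopped once the residual satisfies
    ||H q + g|| <= c * ||g||**(1 + rho), cf. (46). H_mv computes
    Hessian-vector products, so H is never formed explicitly."""
    q = np.zeros_like(g)
    r = -g - H_mv(q)                      # residual of H q = -g
    d = r.copy()
    tol = c * np.linalg.norm(g) ** (1.0 + rho)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        Hd = H_mv(d)
        alpha = (r @ r) / (d @ Hd)        # exact line search along d
        q += alpha * d
        r_new = r - alpha * Hd
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return q

# usage sketch on a random positive-definite Hessian
rng = np.random.default_rng(0)
M = rng.standard_normal((10, 10)); H = M.T @ M + np.eye(10)
g = rng.standard_normal(10)
q = truncated_newton_direction(lambda v: H @ v, g)
print(np.linalg.norm(H @ q + g))          # small residual, per the forcing rule
```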

Global Convergence
This section provides global convergence guarantees for Algorithm 2. Because MO steps do not increase the objective value, global convergence of Algorithm 2 follows from the analysis in [20] by treating (44) as solving (2) with $H_t = \hat{L} I$ and noting that this update always satisfies (6) with $\alpha_t = 1$. For completeness, we still state these results and provide a proof in Appendix C. First, we restate a result of [20] bounding the number of steps spent in the while-loop for enlarging $H_t$ in Algorithm 2.
Lemma 3 ([20, Lemma 4]) Given an initial choice $H_t^0$ for $H_t$ at the $t$th iteration of Algorithm 2 (so we start with $H_t = H_t^0$ and modify it when (6) fails with $\alpha_t = 1$) and a parameter $\beta \in (0, 1)$, consider the following two variants for enlarging $H_t$, starting from $\sigma = 1$: (Variant 1) $H_t \leftarrow H_t / \beta$; (Variant 2) $H_t \leftarrow H_t^0 + \sigma I$ with $\sigma \leftarrow \sigma / \beta$. We then have the following bounds if the approximate solution to (2) always satisfies $Q_{H_t}(p^t; x^t) \le 0$.
1. If $m \preceq H_t^0 \preceq M$, then the final $H_t$ from (Variant 1) satisfies $\|H_t\| \le M \max\{1, L/(\beta m)\}$, and the while-loop terminates within $\log_{\beta^{-1}}(L/m)$ rounds.
2. If $H_t^0 \succeq 0$, then the final $H_t$ from (Variant 2) satisfies $\|H_t\| \le M + \max\{1, L/\beta\}$, and the while-loop terminates within $1 + \log_{\beta^{-1}} L$ rounds.

Now we provide global convergence guarantees for Algorithm 2 without the need for manifold identification. From Lemma 3, we can assume without loss of generality that (7) holds for the final $H_t$ that leads to the final update direction satisfying (6).
Theorem 5 Consider (1) with $f$ $L$-smooth for some $L > 0$, $\Psi \in \Gamma_0$, and $\Omega \neq \emptyset$. Assume Algorithm 2 is applied with an initial point $x^0$, the estimate $\hat{L}$ satisfies $\hat{L} \ge L$, (7) holds for the final $H_t$ after exiting the while-loop, and (8) is satisfied with (22) for some $\eta \in [0, 1)$ fixed over $t$. Let $\{k_t\}$ be the iterations at which the MO step is not attempted (so either (2) is solved approximately or (44) is conducted); then $k_t \le 2t$ for all $t$. Denoting $\hat{M} := \max\{\hat{L}, M\}$ and $\hat{m} := \min\{\hat{L}, m\}$, we have the following convergence rate guarantees.
1. Let $G^t := \arg\min_p Q_I(p; x^t)$. Then $\|G^{k_t}\| \to 0$, with $\min_{0 \le i \le t} \|G^{k_i}\|^2 = O(t^{-1})$ for all $t \ge 0$. Moreover, $G^t = 0$ if and only if $0 \in \partial F(x^t)$, and therefore any limit point of $\{x^{k_t}\}$ is a stationary point of (1).
2. If in addition $f$ is convex and there exists $R_0 \in [0, \infty)$ such that
$$\operatorname{dist}(x, \Omega) \le R_0, \quad \forall x \in \operatorname{Lev}\left(F(x^0) - F^*\right)$$
(in other words, (38) holds with $\theta = 0$, $\zeta = R_0^{-1}$, and $\xi = F(x^0) - F^*$), then $F(x^t) - F^* = O(t^{-1})$.
3. The results of Theorem 4 hold, with $M$ and $H_t$ replaced by $\hat{M}$, $\alpha_t$ and $\bar{\alpha}$ by $1$, $\delta_t$ by $\delta_{k_t}$, $\delta_{t+1}$ by $\delta_{k_t + 1}$, and $\delta_{t_0}$ by $\delta_{k_{t_0}}$.

Superlinear and Quadratic Convergence
Following the argument in the previous subsection that treats (44) as solving (2) exactly, the manifold identification property of (44) also follows from Theorem 2. We thus focus on local convergence in this subsection. In what follows, we show that $x^t$ converges to a stationary point $x^*$ satisfying (14) superlinearly, or even quadratically, in the second stage.
Let $\mathcal{M}$ be the active manifold of $x^*$ and $\phi$ a parameterization of $\mathcal{M}$ with $\phi(y^*) = x^*$ for some point $y^*$. We can thus assume without loss of generality that $\phi_t = \phi$ for all $t$ that identified $\mathcal{M}$. We denote $F_\phi(y) := F(\phi(y))$. For simplicity, we assume that $F_\phi$ is twice-differentiable with its Hessian locally Lipschitz continuous around $y^*$. In particular, we just need the following property to hold in a neighborhood $U_0$ of $y^*$ for some $L_H \ge 0$:
$$\left\|\nabla^2 F_\phi(y_1) - \nabla^2 F_\phi(y_2)\right\| \le L_H \|y_1 - y_2\|, \quad \forall y_1, y_2 \in U_0. \tag{48}$$
We do not assume $\nabla^2 F_\phi(y^*) \succ 0$ as in existing analyses for Newton's method, but consider a degenerate case in which there is a neighborhood $U_1$ of $y^*$ such that
$$\nabla^2 F_\phi(y) \succeq 0, \quad \forall y \in U_1. \tag{49}$$
Note that (49) implies that $F_\phi$ is convex within $U_1$. We can decompose $\mathcal{E}$ into the direct sum of the tangent and normal spaces of $\mathcal{M}$ at $x^*$, and the stationarity of $x^*$ thus implies $\nabla F_\phi(y^*) = 0$. This and (49) mean $y^*$ is a local optimum of $F_\phi$, and hence $x^*$ is a local minimum of $F$ when $f$ is $L$-smooth, following the argument of [43, Theorem 2.5]. We also assume that $F_\phi$ satisfies, for some $\zeta > 0$ and $\theta \in (0, 1/2]$, a sharpness condition (50) analogous to (38) in a neighborhood $U_2$ of $y^*$. By shrinking the neighborhoods if necessary, we assume without loss of generality that $U_0 = U_1 = U_2$ and denote this common neighborhood by $U$. Note that the conventional assumption of a positive-definite Hessian at $y^*$ is a special case that satisfies (49) and (50) with $\theta = 1/2$. We define $d^t := y^t - y^*$ and use it to bound $\|y^t + q^t - y^*\|$ and $\|\nabla F_\phi(y^t + q^t)\|$.
Lemma 4 Consider a stationary point $x^*$ of (1) with $\Psi$ partly smooth at $x^*$ relative to a manifold $\mathcal{M}$ with a parameterization $\phi$ and a point $y^*$ such that $\phi(y^*) = x^*$, and assume that within a neighborhood $U$ of $y^*$, $F_\phi$ is twice-differentiable and (48) and (49) hold. Then $y^t \in U$ implies that any $q^t$ satisfying (46) is bounded as in (51).

Proof From (46), we can find $\psi^t \in \mathcal{E}$ such that (52) holds. From (49) and (45), we obtain (53)-(54). From the triangle inequality, we have $\|q^t\| \le \|y^t - y^*\| + \|y^t + q^t - y^*\|$, whose combination with (54) proves (51).
Lemma 5 Consider the setting of Lemma 4 and assume further that $\Psi \in \Gamma_0$ and $f$ is $L$-smooth. Then the following hold.
We are now able to show two-step superlinear convergence of $\|x^t - x^*\|$.
Theorem 6 Consider the setting of Lemma 5 and assume in addition that $x^*$ satisfies (14). Then there is a neighborhood $V$ of $x^*$ such that if at the $t_0$th iteration of Algorithm 2, for some $t_0 > 0$, we have $x^{t_0} \in V$, Unchanged $\ge S$, $\mathcal{M}$ is correctly identified with parameterization $\phi$ and $\phi(y^*) = x^*$, and $\alpha_t = 1$ is taken in Algorithm 3 for all $t \ge t_0$, then the following hold for all $t \ge t_0$.
1. For $\rho \in (0, 1]$ and $F_\phi$ satisfying (50) with $\theta = 1/2$ for some $\zeta > 0$: the two-step superlinear rate (63) holds.
2. For $\rho = 0.69$ and $F_\phi$ satisfying (50) for some $\zeta > 0$ and $\theta \ge 3/8$: an analogous two-step superlinear rate holds.

Proof In the discussion below, $V_i$ and $U_i$ for $i \in \mathbb{N}$ are respectively neighborhoods of $x^*$ and $y^*$. Since $\phi$ is $C^2$, there is a neighborhood $U_1$ of $y^*$ such that (64) holds. Because the derivative of $\phi$ at $y^*$ is injective, (64) implies (65). If the $t$th iteration is a TSSN step, we define $y^t$ to be the point such that $\phi(y^t) = x^t$. If either case of Lemma 5 holds and $q^t$ satisfies (46), then from $\|y^t + q^t - y^*\| = o(\|y^t - y^*\|)$ we can find $U_2 \subset U$ such that $y^t \in U_2$ implies $y^t + q^t \in U_2$. Taking $U_3 := U_1 \cap U_2 \subset U$, for $y^t \in U_3$ and $x^{t+1} = \phi(y^t + q^t)$, we get $y^t + q^t \in U_3 \subset U_1$, and hence (66) follows from (65).
On the other hand, consider the case in which the $t$th iteration is a PG step. As $\phi$ is $C^2$ and $\phi(y^*) = x^*$, there is a neighborhood $V_3$ of $x^*$ on which the corresponding bounds hold. Therefore, from Theorem 2 (applicable because we have assumed (14)), there is $V_4$ such that $x^t \in V_4$ implies $x^{t+1} \in \mathcal{M}$. Taking $V_5 := V_4 \cap V_3$, $x^t \in V_5$ implies $x^{t+1} \in V_1 \cap \mathcal{M}$, so we can find $y^{t+1} \in U_3$ with $\phi(y^{t+1}) = x^{t+1}$. Now consider the first case in the statement. If at the $t$th iteration we have $x^t \in V_5$ and have taken (44), then $x^{t+1} \in V_1 \cap \mathcal{M}$ with $x^{t+1} = \phi(y^{t+1})$ for some $y^{t+1} \in U_3$, so we can take a TSSN step at the $(t+1)$th iteration; there is thus $V_6 \subset V_5$ such that $x^t \in V_6$ implies $x^{t+2} \in V_6$ as well, and the superlinear convergence in (68) propagates to $t+2$, $t+4$, and so on. We therefore see that, no matter whether we take PG or TSSN first, the first equation in (63) is proven. The convergence of $\nabla F_\phi$ then follows from (57) and (58). The superlinear convergence in the second case follows from the same argument.
Note that when $\rho = 1$ in the first case, we obtain quadratic convergence.
The analysis in [47] directly assumed (58) instead of (50), together with a Lipschitzian Hessian for $f$, under the setting of regularized optimization to obtain a superlinear rate. In the context of smooth optimization, our analysis is more general in giving a wider range for superlinear convergence. In particular, for (58), [47] only allowed $\hat{\theta} = 1$, where $\hat{\theta} := \theta / (1 - \theta)$, whereas our result extends the range of superlinear convergence to $\hat{\theta} \ge 0.6$.
Remark 1 PCG returns the exact solution of (45) in $d$ steps, where $d$ is the dimension of $\mathcal{M}$, and each step involves only a bounded number of basic linear-algebra operations, so the running time of Algorithm 3 is bounded. Therefore, superlinear convergence of Algorithm 2 in iterations, from Theorem 6, implies superlinear convergence in running time as well. This contrasts with existing PN approaches, which all require applying an iterative subproblem solver to (2) with increasing precision, taking increasing time per iteration because (2) has no closed-form solution.

Numerical Results
We conduct numerical experiments on $\ell_1$-regularized logistic regression to support our theory. This problem is of the form (1) with $\mathcal{E} = \mathbb{R}^d$ for some $d \in \mathbb{N}$:
$$\min_{x \in \mathbb{R}^d} \; \sum_{i=1}^{n} \log\left(1 + \exp\left(-b_i \langle a_i, x \rangle\right)\right) + \lambda \|x\|_1, \tag{69}$$
where $\lambda > 0$ decides the weight of the regularization and $(a_i, b_i) \in \mathbb{R}^d \times \{-1, 1\}$ for $i = 1, \dots, n$ are the data points. Note that $\lambda \|x\|_1$ is partly smooth at every $x \in \mathbb{R}^d$ relative to $\mathcal{M}_x := \{y \mid y_i = 0, \ \forall i \in J_x\}$, where $J_x := \{i \mid x_i = 0\}$. Let $I$ be the identity matrix and $J_x^C := \{1, \dots, d\} \setminus J_x$; in view of the definition of $\mathcal{M}_x$ here, the parameterization we use at each iteration simply embeds $y \in \mathbb{R}^{|J_x^C|}$ into $\mathcal{M}_x$ by filling the coordinates in $J_x$ with zeros. We use publicly available real-world data sets listed in Table 1. All experiments are conducted with $\lambda = 1$ in (69). All methods are implemented in C++, and we set $\gamma = 10^{-4}$, $\beta = 0.5$, $T = 5$ throughout for ISQA and ISQA$^+$.
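For concreteness, here is a small sketch of the objective (69), as reconstructed above, and of the support-based parameterization just described; the helper names are ours:

```python
import numpy as np

def manifold_support(x):
    # J_x^C: the non-zero coordinates, identifying the active manifold M_x of l1
    return np.nonzero(x)[0]

def phi(y, support, d):
    # the parameterization: embed y into M_x by filling J_x with zeros
    x = np.zeros(d)
    x[support] = y
    return x

def logreg_objective(x, A, b, lam):
    # (69): rows of A are a_i, b in {-1, +1}; logaddexp(0, z) = log(1 + exp(z))
    return np.sum(np.logaddexp(0.0, -b * (A @ x))) + lam * np.abs(x).sum()
```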

Manifold Identification of Different Subproblem Solvers
We start by examining the manifold identification ability of different subproblem solvers. We run both ISQA and the first stage of ISQA$^+$ (by setting $S = \infty$) and consider two settings for $H_t$. The first is the L-BFGS approximation with a safeguard from [21], where we set $m = 10$ and $\delta = 10^{-10}$ in their notation, following their experimental setting. The second is the PN approach of [47], which takes $H_t$ as the Hessian of $f$ plus a damping multiple of the identity, where we set $\rho = 0.5$ and $c = 10^{-6}$ following their suggestion. In both cases, we enlarge $H_t$ in Algorithm 2 through $H_t \leftarrow 2 H_t$. We compare the following subproblem solvers.
-SpaRSA [44]: a backtracking PG-type method with the initial step sizes estimated by the Barzilai-Borwein method.
-Random-permutation cyclic coordinate descent (RPCD): Cyclic proximal coordinate descent with the order of coordinates reshuffled every epoch.
The results presented in Table 2 show that all subproblem solvers for ISQA$^+$ can identify the active manifold, verifying Theorem 2. Because the step sizes are mostly one in this experiment, even the solvers within plain ISQA identify the active manifold. Among the solvers, RPCD is the most efficient and stable in identifying the active manifold, so we use it in all subsequent experiments.

Comparing ISQA + with Existing Algorithms
We proceed to compare ISQA$^+$ with the following state-of-the-art methods for (1), measured by the relative objective value $(F(x) - F^*)/F^*$.
As in our L-BFGS variant, we set $m = 10$ for constructing $H_t$ in this experiment.
IRPN [47] is another PN method that performs slightly faster than NewGLMNET, but their algorithmic frameworks are similar, and the experiments in [47] showed that the running time of NewGLMNET is competitive. We thus use NewGLMNET as the representative because its code is open-source.
For ISQA$^+$, we set $S = 10$ and use both the PN and L-BFGS variants with RPCD in the first stage and Algorithm 3 with $\rho = 0.5$, $c = 10^{-6}$ in the second. For PCG, we use the diagonal of $H_t$ as the preconditioner. We use a heuristic that lets PCG start with an iteration bound $T_0 = 5$, doubles the bound whenever $\alpha_t = 1$ until it reaches the dimension of $\mathcal{M}$, and resets it to $5$ when $\alpha_t < 1$. As for the value of $S$, although tuning it properly might lead to even better performance, we observe that the current setting already suffices to demonstrate the improved performance of the proposed algorithm.
Results in Fig. 1 show the superlinear convergence in running time of ISQA$^+$, while LHAC and NewGLMNET only exhibit linear convergence. We observe that for data with $n \gg d$, including a9a, ijcnn1, covtype.scale, and epsilon, L-BFGS approaches are faster because $H_t p$ can be evaluated cheaply (LHAC failed on covtype.scale due to implementation issues), while PN approaches are faster otherwise, so no algorithm is always superior. Nonetheless, for the same type of $H_t$, ISQA$^+$-LBFGS and ISQA$^+$-Newton respectively improve greatly over the state-of-the-art algorithms LHAC and NewGLMNET because of their fast local convergence, especially when the base method converges slowly.

Conclusions
In this paper, we showed that for regularized problems with a partly smooth regularizer, inexact successive quadratic approximation is essentially able to identify the active manifold, because a mild sufficient condition is satisfied by most commonly-used subproblem solvers. An efficient algorithm, ISQA$^+$, utilizing this property is proposed to attain superlinear convergence in running time on a wide class of degenerate problems, greatly improving upon the state of the art for regularized problems, which only exhibits superlinear convergence in outer iterations. Numerical evidence illustrates that ISQA$^+$ significantly improves the running time of state-of-the-art methods for regularized optimization.

Table 2: Outer iterations and time (seconds) for different subproblem solvers to identify the active manifold. For each ISQA$^+$ variant, the fastest running time among all subproblem solvers is boldfaced.