New Results on Superlinear Convergence of Classical Quasi-Newton Methods

We present a new theoretical analysis of local superlinear convergence of classical quasi-Newton methods from the convex Broyden class. As a result, we obtain a significant improvement in the currently known estimates of the convergence rates for these methods. In particular, we show that the corresponding rate of the Broyden–Fletcher–Goldfarb–Shanno method depends only on the product of the dimensionality of the problem and the logarithm of its condition number.


Introduction
We study local superlinear convergence of classical quasi-Newton methods for smooth unconstrained optimization. These algorithms can be seen as an approximation of the standard Newton method, in which the exact Hessian is replaced by some operator that is updated between iterations using the gradients of the objective function. The two most famous examples of quasi-Newton algorithms are the Davidon-Fletcher-Powell (DFP) [1,2] and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) [3-7] methods, which together belong to the Broyden family [8] of quasi-Newton algorithms. For an introduction to the topic, see [9] and [10, Chapter 6]. See also [11] for a discussion of quasi-Newton algorithms in the context of nonsmooth optimization.
However, explicit rates of superlinear convergence for quasi-Newton algorithms were obtained only recently. The first results were presented in [26] for the greedy quasi-Newton methods. After that, in [27], the classical quasi-Newton methods were considered, for which the authors established certain superlinear convergence rates, depending on the problem dimension and its condition number. The analysis was based on the trace potential function, which was then augmented by the logarithm of the determinant of the inverse Hessian approximation to extend the proof to the general nonlinear case.
In this paper, we further improve the results of [27]. For the classical quasi-Newton methods, we obtain new convergence rate estimates, which have a better dependency on the condition number of the problem. In particular, we show that the superlinear convergence rate of BFGS depends on the condition number only through the logarithm. As compared to the previous work, the main difference in the analysis is the choice of the potential function: now the main part is formed by the logarithm of the determinant of the Hessian approximation, which is then augmented by the trace of the inverse Hessian approximation.
It is worth noting that recently, in [28], another analysis of local superlinear convergence of the classical DFP and BFGS methods was presented with a resulting rate that is independent of the dimensionality of the problem and its condition number. However, to obtain such a rate, the authors had to make an additional assumption that the methods start from a sufficiently good initial Hessian approximation. Without this assumption, to our knowledge, their proof technique, based on the Frobenius-norm potential function, leads only to rates that are weaker than those in [27].

This paper is organized as follows. In Sect. 2, we introduce our notation. In Sect. 3, we study the convex Broyden class of quasi-Newton updates for approximating a self-adjoint positive definite operator. In Sect. 4, we analyze the rate of convergence of the classical quasi-Newton methods from the convex Broyden class as applied to minimizing a quadratic function. On this simple example, where the Hessian is constant, we illustrate the main ideas of our analysis. In Sect. 5, we consider the general unconstrained optimization problem. Finally, in Sect. 6, we discuss why the new superlinear convergence rates, obtained in this paper, are better than the previously known ones.

Notation
In what follows, E denotes an n-dimensional real vector space. Its dual space, composed of all linear functionals on E, is denoted by E*. The value of a linear functional s ∈ E*, evaluated at a point x ∈ E, is denoted by ⟨s, x⟩.
For a smooth function f : E → R, we denote by ∇f(x) and ∇²f(x) its gradient and Hessian, respectively, evaluated at a point x ∈ E. Note that ∇f(x) ∈ E*, and ∇²f(x) is a self-adjoint linear operator from E to E*. The partial ordering of self-adjoint linear operators is defined in the standard way: A ⪯ B if and only if ⟨(B − A)x, x⟩ ≥ 0 for all x ∈ E.
Any self-adjoint positive definite linear operator A : E → E* induces in the spaces E and E* the following pair of conjugate Euclidean norms: ‖x‖_A := ⟨Ax, x⟩^{1/2} for x ∈ E, and ‖s‖*_A := ⟨s, A^{-1}s⟩^{1/2} for s ∈ E*. When A = ∇²f(x), where f : E → R is a smooth function with positive definite Hessian, and x ∈ E, we prefer to use the notation ‖·‖_x and ‖·‖*_x, provided that there is no ambiguity with the reference function f.

Sometimes, in formulas involving products of linear operators, it is convenient to treat x ∈ E as a linear operator from R to E, defined by xα = αx, and x* as a linear operator from E* to R, defined by x*s = ⟨s, x⟩. Likewise, any s ∈ E* can be treated as a linear operator from R to E*, defined by sα = αs, and s* as a linear operator from E to R, defined by s*x = ⟨s, x⟩. In this case, xx* and ss* are rank-one self-adjoint linear operators from E* to E and from E to E*, respectively, acting as follows: (xx*)s = ⟨s, x⟩x and (ss*)x = ⟨s, x⟩s.

Given two self-adjoint linear operators A : E → E* and H : E* → E, we define the trace and the determinant of A with respect to H as follows: ⟨H, A⟩ := Tr(HA) and Det(H, A) := Det(HA). Note that HA is a linear operator from E to itself, and hence its trace and determinant are well defined by its eigenvalues (they coincide with the trace and determinant of the matrix representation of HA with respect to an arbitrarily chosen basis in the space E, and the result is independent of the particular choice of basis). In particular, if H is positive definite, then ⟨H, A⟩ and Det(H, A) are, respectively, the sum and the product of the eigenvalues of A relative to H^{-1}. Observe that ⟨·, ·⟩ is a bilinear form, and for any x ∈ E, we have ⟨Ax, x⟩ = ⟨xx*, A⟩. When A is invertible, we also have ⟨A^{-1}, A⟩ = n and Det(A^{-1}, δA) = δ^n for any δ ∈ R. Also recall the following multiplicative formula for the determinant: Det(H, A) = Det(H, G) · Det(G^{-1}, A), which is valid for any invertible linear operator G : E → E*. If the operator H is positive semidefinite and A₁ ⪯ A₂, then ⟨H, A₁⟩ ≤ ⟨H, A₂⟩.
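The relative trace and determinant are easy to sanity-check numerically. The following sketch (our own illustration, not part of the paper; `make_spd` is a hypothetical helper) verifies, in matrix representation, that ⟨H, A⟩ and Det(H, A) are the sum and product of the eigenvalues of HA, and that both are invariant under a change of basis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def make_spd(n):
    # Random symmetric positive definite matrix (matrix representation
    # of a self-adjoint positive definite operator in a fixed basis)
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A = make_spd(n)   # plays the role of an operator E -> E*
H = make_spd(n)   # plays the role of an operator E* -> E

# <H, A> := Tr(HA) and Det(H, A) := Det(HA) are defined by the eigenvalues of HA
tr_HA = np.trace(H @ A)
det_HA = np.linalg.det(H @ A)
lam = np.linalg.eigvals(H @ A).real   # eigenvalues of A relative to H^{-1}

# A change of basis turns the matrix of HA into P^{-1} (HA) P,
# which leaves both the trace and the determinant unchanged
P = rng.standard_normal((n, n)) + n * np.eye(n)
HA_in_new_basis = np.linalg.inv(P) @ (H @ A) @ P
```

The same computation also confirms the identities ⟨A^{-1}, A⟩ = n and Det(A^{-1}, δA) = δ^n.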

Convex Broyden Class
Let A and G be two self-adjoint positive definite linear operators from E to E*, where A is the target operator, which we want to approximate, and G is its current approximation. The Broyden class of quasi-Newton updates of G with respect to A along a direction u ∈ E \ {0} is the following family of updating formulas, parameterized by a scalar τ ∈ R: Broyd_τ(A, G, u) is the operator G₊ whose inverse is the combination

G₊^{-1} = τ · [DFP(A, G, u)]^{-1} + (1 − τ) · [BFGS(A, G, u)]^{-1},

where the DFP and BFGS updates of G with respect to A along u are given by the standard formulas

[DFP(A, G, u)]^{-1} = G^{-1} − (G^{-1}(Au)(Au)*G^{-1}) / ⟨Au, G^{-1}Au⟩ + uu* / ⟨Au, u⟩,
BFGS(A, G, u) = G − (Gu u*G) / ⟨Gu, u⟩ + ((Au)(Au)*) / ⟨Au, u⟩,

and φ_τ denotes the corresponding weight of the DFP component in the direct updating formula (3). The convex Broyden class corresponds to τ ∈ [0, 1]. If the denominator in (3) is zero, we leave both φ_τ and Broyd_τ(A, G, u) undefined. For the sake of convenience, we also set Broyd_τ(A, G, u) := G for u = 0.
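In matrix form (identifying E and E* with R^n), the family can be sketched as follows. This is our own illustrative implementation, assuming the standard DFP and BFGS formulas and the inverse-combination role of τ (cf. Remark 3.1 below); the function names are ours:

```python
import numpy as np

def dfp_inv(A, G, u):
    # Inverse-operator form of the DFP update of G w.r.t. A along u:
    # H+ = H - (H y y^T H)/<y, H y> + (u u^T)/<y, u>, with y = A u, H = G^{-1}
    H = np.linalg.inv(G)
    y = A @ u
    Hy = H @ y
    return H - np.outer(Hy, Hy) / (y @ Hy) + np.outer(u, u) / (y @ u)

def bfgs(A, G, u):
    # Direct BFGS update: G+ = G - (G u u^T G)/<G u, u> + (y y^T)/<y, u>
    y = A @ u
    Gu = G @ u
    return G - np.outer(Gu, Gu) / (u @ Gu) + np.outer(y, y) / (y @ u)

def broyd(A, G, u, tau):
    # Convex Broyden class: tau weighs the DFP component of the *inverse*
    # operator, [Broyd_tau]^{-1} = tau [DFP]^{-1} + (1 - tau) [BFGS]^{-1}
    H_plus = tau * dfp_inv(A, G, u) + (1.0 - tau) * np.linalg.inv(bfgs(A, G, u))
    return np.linalg.inv(H_plus)
```

Every member of the family inherits the secant-type relation G₊u = Au, since both extreme inverse updates map Au back to u.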

Remark 3.1
Usually the Broyden class is defined directly in terms of the parameter φ. However, in the context of this paper, it is more convenient to work with τ instead of φ. As can be seen from (66), τ is exactly the weight of the DFP component in the updating formula for the inverse operator.
A basic property of an update from the convex Broyden class is that it preserves the bounds on the eigenvalues with respect to the target operator. Consider the following measure of closeness of G to A along a direction u ∈ E \ {0}:

ν(A, G, u) := ⟨(G − A)u, G^{-1}(G − A)u⟩^{1/2} / ⟨Au, u⟩^{1/2},   (4)

i.e., the ratio of the norm of (G − A)u, measured with respect to G, and the norm of u, measured with respect to A. Let us present two potential functions, whose improvement after one update from the convex Broyden class can be bounded from below by a certain nonnegative, monotonically increasing function of ν, vanishing at zero. First, consider the log-det barrier V(A, G) := ln Det(A^{-1}, G). It will be useful when A ⪯ G. Note that in this case V(A, G) ≥ 0.
Proof Indeed, denoting G₊ := Broyd_τ(A, G, u), we obtain the corresponding expression for the improvement of the barrier. Therefore, denoting ν := ν(A, G, u) and using (4), we can bound the required terms. Substituting the above two inequalities into (6), we obtain the claim.

Now consider another potential function, the augmented log-det barrier:

ψ(G, A) := ⟨G^{-1}, A⟩ − n − ln Det(G^{-1}, A).   (8)

As compared to the log-det barrier, this potential function is more universal, since it works even if the condition A ⪯ G is violated. Note that the augmented log-det barrier is in fact the Bregman divergence generated by the strictly convex function d(A) := −ln Det(B^{-1}, A), defined on the set of self-adjoint positive definite linear operators from E to E*, where B : E → E* is an arbitrary fixed self-adjoint positive definite linear operator. Indeed, since ∇d(G) = −G^{-1}, we have

d(A) − d(G) − ⟨∇d(G), A − G⟩ = ⟨G^{-1}, A⟩ − n − ln Det(G^{-1}, A) = ψ(G, A).

Remark 3.2 The idea of combining the trace with the logarithm of the determinant to form a potential function for the analysis of quasi-Newton methods can be traced back to [29]. Note also that in [27], the authors studied the evolution of ψ(A, G), i.e., the Bregman divergence was centered at A instead of G.
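The quantities ν, V and ψ are straightforward to evaluate numerically. Below is a minimal sketch (our own code, not from the paper; the formula for ψ is the trace-plus-log-det expression described in the text, with the Hessian approximation as the first argument):

```python
import numpy as np

def nu(A, G, u):
    # nu(A, G, u): G-norm of (G - A)u divided by the A-norm of u
    r = (G - A) @ u
    return np.sqrt(r @ np.linalg.solve(G, r)) / np.sqrt(u @ (A @ u))

def V(A, G):
    # log-det barrier V(A, G) = ln Det(A^{-1}, G); nonnegative when A <= G
    return np.linalg.slogdet(np.linalg.solve(A, G))[1]

def psi(G, A):
    # augmented log-det barrier:
    # psi(G, A) = <G^{-1}, A> - n - ln Det(G^{-1}, A) >= 0
    n = A.shape[0]
    GiA = np.linalg.solve(G, A)
    return np.trace(GiA) - n - np.linalg.slogdet(GiA)[1]

def bfgs(A, G, u):
    # BFGS update of G w.r.t. A along u (used here only to probe the potentials)
    y = A @ u
    Gu = G @ u
    return G - np.outer(Gu, Gu) / (u @ Gu) + np.outer(y, y) / (y @ u)
```

A quick experiment with G := L·I ⪰ A confirms that V(A, G) ≥ 0, that ψ is nonnegative and vanishes at G = A, and that a BFGS update does not increase the log-det barrier.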

Lemma 3.3 For any real
Proof We only need to prove the first inequality in (10), since the second one follows from it.
It turns out that, up to some constants, the improvement in the augmented log-det barrier can be bounded from below by exactly the same logarithmic function of ν, which was used for the simple log-det barrier.
Proof Indeed, denoting G₊ := Broyd_τ(A, G, u), we obtain the corresponding expression (11) for the improvement of the potential, where α₁, α₀ and β₁, β₀, β₂ denote the respective scalar quantities. Note that α₁ ≥ β₁ and α₀ ≥ β₀ by the Cauchy–Schwarz inequality. At the same time, τβ₁ + (1 − τ)β₂ ≥ β by the convexity of the inverse function t → t^{-1}. Hence, we can apply Lemma 3.3 to estimate (11) from below, which proves the claim.

The measure ν(A, G, u), defined in (4), is the ratio of the norm of (G − A)u, measured with respect to G, and the norm of u, measured with respect to A. It is important that we can change the corresponding metrics to G₊ and G, respectively, paying only with the minimal eigenvalue of G relative to A.

Unconstrained Quadratic Minimization
Let us study the convergence properties of the classical quasi-Newton methods from the convex Broyden class, as applied to minimizing the quadratic function

f(x) := (1/2)⟨Ax, x⟩ − ⟨b, x⟩,   (16)

where A : E → E* is a self-adjoint positive definite linear operator, and b ∈ E*. Let B : E → E* be a fixed self-adjoint positive definite linear operator, and let μ, L > 0 be such that μB ⪯ A ⪯ LB. Thus, μ is the strong convexity parameter of f, and L is the constant of Lipschitz continuity of the gradient of f, both measured relative to B.

Consider the following standard quasi-Newton process for minimizing (16). Set G₀ := LB and, for k ≥ 0, iterate:

x_{k+1} := x_k − G_k^{-1}∇f(x_k),   u_k := x_{k+1} − x_k,   G_{k+1} := Broyd_{τ_k}(A, G_k, u_k),   (18)

with τ_k ∈ [0, 1]. For measuring its rate of convergence, we use the norm of the gradient, taken with respect to the Hessian:

λ_k := ‖∇f(x_k)‖*_A = ⟨∇f(x_k), A^{-1}∇f(x_k)⟩^{1/2}.

It is known that the process (18) has at least the linear convergence rate of the standard gradient method.

Theorem 4.1 In scheme (18), for all k ≥ 0, we have λ_{k+1} ≤ (1 − μ/L)λ_k.

Let us establish the superlinear convergence. According to (19), for the quadratic function, we have A ⪯ G_k for all k ≥ 0. Therefore, in our analysis, we can use both potential functions: the log-det barrier and the augmented log-det barrier. Let us consider both options, starting with the first one.

Theorem 4.2 In scheme (18), for all k ≥ 1, the gradient norm λ_k converges superlinearly; in particular, for BFGS (τ_i ≡ 0),

λ_k ≤ [exp((n/k) ln(L/μ)) − 1]^k · λ_0.
Proof Without loss of generality, we can assume that u_i ≠ 0 for all 0 ≤ i ≤ k.

Remark 4.1 As can be seen from the proof, the factor n ln(L/μ) in the estimate of Theorem 4.2 can be replaced by the improved factor Σ_{i=1}^n ln(L/λ_i), where λ₁, …, λ_n are the eigenvalues of A relative to B. This improved factor can be significantly smaller than the original one if the majority of the eigenvalues λ_i are much larger than μ.
Let us briefly present another approach, which is based on the augmented log-det barrier. The resulting efficiency estimate will be the same as in Theorem 4.2, up to a slightly worse absolute constant under the exponent. However, this proof can be extended to general nonlinear functions.

Theorem 4.3 In scheme (18), for all k ≥ 1, the estimate of Theorem 4.2 holds up to a slightly worse absolute constant under the exponent.
Proof Without loss of generality, we can assume that u_i ≠ 0 for all 0 ≤ i ≤ k. Denote ψ_i := ψ(G_i, A). By Lemma 3.4 and (19), for all 0 ≤ i ≤ k − 1, the improvement ψ_i − ψ_{i+1} is bounded from below by 6/13 times the corresponding logarithmic function of ν. Summing these bounds, we get that their total does not exceed ψ_0 = ψ(LB, A) ≤ n ln(L/μ), and we can continue exactly as in the proof of Theorem 4.2.
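The quadratic scheme (18) is easy to simulate. The following sketch (our own illustration, with B = I and τ_k ≡ 0, i.e., BFGS) tracks λ_k and exhibits the linear rate of the standard gradient method followed by fast superlinear decay:

```python
import numpy as np

rng = np.random.default_rng(2)
n, mu, L = 5, 1.0, 10.0

# Quadratic f(x) = 0.5 <Ax, x> - <b, x> with mu*B <= A <= L*B, here B = I
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(mu, L, n)) @ Q.T
b = rng.standard_normal(n)

def bfgs(A, G, u):
    # BFGS update of G w.r.t. A along u
    y = A @ u
    Gu = G @ u
    return G - np.outer(Gu, Gu) / (u @ Gu) + np.outer(y, y) / (y @ u)

# Scheme (18) with tau_k = 0 (BFGS) and G_0 = L * B
x, G = np.zeros(n), L * np.eye(n)
lambdas = []     # lambda_k = ||grad f(x_k)||*_A, the gradient norm w.r.t. A
for k in range(30):
    g = A @ x - b
    lambdas.append(np.sqrt(g @ np.linalg.solve(A, g)))
    u = -np.linalg.solve(G, g)
    if np.linalg.norm(u) < 1e-14:    # converged; avoid degenerate updates
        break
    x = x + u
    G = bfgs(A, G, u)
```

In this run, each step contracts λ_k at least by the factor 1 − μ/L, and after a few iterations the residual collapses far faster than any linear rate.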

Minimization of General Functions
In this section, we consider the general unconstrained minimization problem

min_{x ∈ E} f(x),   (24)

where f : E → R is a twice continuously differentiable function with positive definite second derivative. Our goal is to study the convergence properties of the following standard quasi-Newton scheme for solving (24). Set G₀ := LB and, for k ≥ 0, iterate:

x_{k+1} := x_k − G_k^{-1}∇f(x_k),   u_k := x_{k+1} − x_k,   G_{k+1} := Broyd_{τ_k}(J_k, G_k, u_k),   (25)

where J_k := ∫₀¹ ∇²f(x_k + t u_k) dt is the integral Hessian along the step. Here, B : E → E* is a self-adjoint positive definite linear operator, and L is a positive constant, which together define the initial Hessian approximation G₀. We assume that there exist constants μ > 0 and M ≥ 0 such that

μB ⪯ ∇²f(x) ⪯ LB,   (26)
∇²f(y) − ∇²f(x) ⪯ M‖y − x‖_z ∇²f(w),   (27)

for all x, y, z, w ∈ E. The first assumption (26) specifies that, relative to the operator B, the objective function f is μ-strongly convex and its gradient is L-Lipschitz continuous. The second assumption (27) is the condition of strong self-concordancy of f with parameter M. Note that for a quadratic function, we have M = 0.

For measuring the convergence rate of (25), we use the local gradient norm:

λ_k := ‖∇f(x_k)‖*_{x_k} = ⟨∇f(x_k), [∇²f(x_k)]^{-1}∇f(x_k)⟩^{1/2}.

The local convergence analysis of the scheme (25) is, in general, the same as the corresponding analysis in the quadratic case. However, it is much more technical due to the fact that, in the nonlinear case, the Hessian is no longer constant. This causes a few problems.
First, there are now several different ways in which one can treat the Hessian approximation G_k. One can view it as an approximation to the Hessian ∇²f(x_k) at the current iterate x_k, to the Hessian ∇²f(x_*) at the minimizer x_*, to the integral Hessian J_k, etc. Of course, locally, due to strong self-concordancy, all these variants are equivalent, since the corresponding Hessians are close to each other. Nevertheless, from the viewpoint of technical simplicity of the analysis, some options are slightly more preferable than others. We find it most convenient to always think of G_k as an approximation to the integral Hessian J_k.
The second issue is as follows. Suppose we already know the connection between our current Hessian approximation G_k and the actual integral Hessian J_k, e.g., in terms of the relative eigenvalues and the value of the augmented log-det barrier potential function (8). Naturally, we want to know how these quantities change after we update G_k into G_{k+1} at Step 4 of the scheme (25). For this, we apply Lemma 3.1 and Lemma 3.4, respectively. However, the problem is that both of these lemmas provide us only with information on the connection between the update result G_{k+1} and the current integral Hessian J_k (which was used for performing the update), not the next one, J_{k+1}. Therefore, we need to additionally take into account the errors resulting from approximating J_{k+1} by J_k.
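A useful observation behind this setup is that the update with respect to the integral Hessian requires only gradient values, since J_k u_k = ∇f(x_{k+1}) − ∇f(x_k). The following sketch (our own illustration on a simple separable test function, not taken from the paper) implements scheme (25) with τ_k ≡ 0 using exactly this secant information:

```python
import numpy as np

# Illustrative test function f(x) = sum_i (exp(x_i) - x_i),
# with minimizer x* = 0 (our own choice of example)
def grad(x):
    return np.exp(x) - 1.0

n = 4
x = 0.3 * np.ones(n)
L0 = 2.0                     # a valid upper bound on the Hessian near the iterates
G = L0 * np.eye(n)           # G_0 = L B with B = I

for k in range(30):
    g = grad(x)
    u = -np.linalg.solve(G, g)
    x_next = x + u
    # J_k u_k = grad f(x_{k+1}) - grad f(x_k): the integral Hessian acts on u_k
    # through a pure gradient difference, so the BFGS-type update w.r.t. J_k
    # needs no second-order information at all.
    y = grad(x_next) - g
    if y @ u > 1e-16:        # skip degenerate updates near convergence
        Gu = G @ u
        G = G - np.outer(Gu, Gu) / (u @ Gu) + np.outer(y, y) / (y @ u)
    x = x_next
```

Starting close enough to the minimizer, the iterates converge rapidly, in line with the local analysis of this section.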
For estimating the errors, which accumulate as a result of approximating one Hessian by another, it is convenient to introduce the following quantities:

Remark 5.1
The general framework of our analysis is the same as in the previous paper [27]. The main difference is that now another potential function is used for establishing the rate of superlinear convergence (Lemma 5.4). However, in order to properly incorporate the new potential function into the analysis, many parts of the proof had to be appropriately modified, most notably the part related to estimating the region of local convergence. In any case, the analysis presented below is fully self-contained and does not require the reader to first go through [27].
We analyze the method (25) in several steps. The first step is to establish the bounds on the relative eigenvalues of the Hessian approximations with respect to the corresponding Hessians.

Lemma 5.2 For all k ≥ 0, the bounds (32) and (33) hold.
Proof For k = 0, (32) follows from (26) and the fact that G₀ = LB and ξ₀ = 1. Now suppose that k ≥ 0, and that (32) has already been proved for all indices up to k. Then, applying Lemma 5.1 to (32), we obtain (34). Since (1 + Mr_k/2)ξ_k ≤ ξ_{k+1} by (31), this proves (33) for the index k. Applying Lemma 3.1 to (34), we get (32) for the index k + 1, and we can continue by induction.

Corollary 5.1
For all k ≥ 0, we have the corresponding uniform bounds.

The second step in our analysis is to establish a preliminary version of the linear convergence theorem for the scheme (25).

Lemma 5.3 For all k ≥ 0, we have
where q_i denotes the corresponding contraction factor.

Proof Let k, i ≥ 0 be arbitrary. By Taylor's formula, in view of (39) and (30), we obtain λ_{i+1} ≤ (1 + Mr_i/2) q_i λ_i. Consequently, the claimed bound follows.

Next, we establish a preliminary version of the theorem on superlinear convergence of the scheme (25). The proof uses the augmented log-det barrier potential function and is essentially a generalization of the corresponding proof of Theorem 4.3.
By the convexity of the function t → ln(1 + e^t), the per-iteration bounds can be combined over all iterations. In the quadratic case (M = 0), we have ξ_k ≡ 1 (see (31)), so that Lemmas 5.2 and 5.3 reduce to the already known Theorem 4.1, and Lemma 5.4 reduces to the already known Theorem 4.2. In the general case, the quantities ξ_k can grow with iterations. However, as we will see in a moment, by requiring the initial point x₀ in the scheme (25) to be sufficiently close to the solution, we can still ensure that the ξ_k stay uniformly bounded by a sufficiently small absolute constant. This allows us to recover all the main results of the quadratic case.
To write down the region of local convergence of (25), we need to introduce one more quantity, K₀, related to the starting moment of superlinear convergence. For DFP (τ_k ≡ 1) and BFGS (τ_k ≡ 0), we obtain, respectively, the values K₀^{DFP} and K₀^{BFGS}. Now we are ready to prove the main result of this section.

Theorem 5.1 Suppose that, in scheme (25), the initial point x₀ is sufficiently close to the minimizer, in the sense of condition (47). Then, for all k ≥ 0, the bounds of the quadratic case remain valid, and, for all k ≥ 1, the superlinear rate holds up to absolute constants.

Proof Let us prove by induction that, for all k ≥ 0, we have the required bounds. (Hereinafter, ⌈t⌉ for t > 0 denotes the smallest positive integer greater than or equal to t.)

Remark 5.2
In accordance with Theorem 5.1, the parameter M of strong self-concordancy affects only the size of the region of local convergence of the process (25), and not its rate of convergence. We do not know whether this is an artifact of the analysis, but it might be an interesting topic for future research. For a quadratic function, we have M = 0, and so the scheme (25) is globally convergent.
The region of local convergence, specified by (47), depends on the maximum of two quantities: μ/L and 1/K₀. For DFP, the 1/K₀ part of this maximum is in fact redundant, and its region of local convergence is simply inversely proportional to the condition number: O(μ/L). However, for BFGS, the 1/K₀ part does not disappear, and the resulting region of local convergence can be much bigger than O(μ/L) when the condition number L/μ is significantly larger than the dimension n.

Remark 5.3
The previous estimate of the size of the region of local convergence, established in [27], was O(μ/L) for both DFP and BFGS.

Discussion
Let us compare the new convergence rates, obtained in this paper for the classical DFP and BFGS methods, with the previously known ones from [27]. Since the estimates for the general nonlinear case differ from those for the quadratic one only in absolute constants, we discuss only the latter case.
In what follows, we use our standard notation: n is the dimension of the space, μ is the strong convexity parameter, L is the Lipschitz constant of the gradient, and λ k is the local norm of the gradient at the kth iteration.
For BFGS, the previously known rate is given in [27, Theorem 3.2]. Although this estimate (56) is formally valid for all k ≥ 1, it becomes useful only after K₀^{BFGS} iterations. Thus, K₀^{BFGS} can be thought of as the starting moment of the superlinear convergence, according to the estimate (56).
In this paper, we have obtained a new estimate (Theorem 4.2), whose starting moment of superlinear convergence can be described by (59). Indeed, since e^t ≤ 1/(1 − t) = 1 + t/(1 − t) for any t < 1, we have, for all k ≥ K₀^{BFGS},

e^{(n/k) ln(L/μ)} − 1 ≤ ((n/k) ln(L/μ)) / (1 − (n/k) ln(L/μ)).

Comparing the previously known efficiency estimate (56) and its starting moment of superlinear convergence (57) with the new ones, (62) and (59), we thus conclude that we have managed to put the condition number L/μ under the logarithm.

For DFP, the previously known rate (see [27, Theorem 3.2]) has the starting moment of superlinear convergence (63). The new rate, which we have obtained in this paper (Theorem 4.2), is (64). Repeating the same reasoning as above, we can easily obtain that the new starting moment of superlinear convergence is described by (65), and, for all k ≥ K₀^{DFP}, the new estimate (64) takes a correspondingly simplified form.
Thus, compared to the old result, we have improved the factor L²/μ² to (L/μ) ln(L/μ). Interestingly enough, the ratio between the old starting moments (63), (57) of the superlinear convergence of DFP and BFGS and the new ones (65), (59) has remained the same, L/μ, although both estimates have been improved.
It is also interesting to ask whether the results obtained in this paper can be applied to limited-memory quasi-Newton methods such as L-BFGS [30]. Unfortunately, it seems the answer is negative. The main problem is that we cannot say anything interesting about just a few iterations of BFGS. Indeed, according to our main result, after k iterations of BFGS, the initial residual is contracted by the factor [exp((n/k) ln(L/μ)) − 1]^k. For all values k ≤ n ln(L/μ), this contraction factor is in fact bigger than 1, so the result becomes useless.
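The behavior of this contraction factor is easy to tabulate numerically. A minimal sketch (our own code; the parameter values are purely illustrative):

```python
import math

def bfgs_contraction(n, kappa, k):
    # The factor [exp((n/k) ln(kappa)) - 1]^k with kappa = L/mu,
    # by which k iterations of BFGS contract the initial residual
    return (math.exp((n / k) * math.log(kappa)) - 1.0) ** k

# For k <= n ln(kappa) the factor exceeds 1, so no contraction is guaranteed;
# it then decays extremely fast once k exceeds a few multiples of n ln(kappa)
n, kappa = 10, 100.0
threshold = n * math.log(kappa)    # roughly 46 for these values
```

For instance, with n = 10 and L/μ = 100, the factor is still larger than 1 at k = 46, but drops far below machine precision by k ≈ 3·n ln(L/μ).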

Conclusions
We have presented a new theoretical analysis of local superlinear convergence of classical quasi-Newton methods from the convex Broyden class. Our analysis has been based on the potential function involving the logarithm of the determinant of the Hessian approximation and the trace of the inverse Hessian approximation. Compared to the previous works, we have obtained new convergence rate estimates with a much better dependency on the condition number of the problem.
Note that all our results are local, i.e. they are valid under the assumption that the starting point is sufficiently close to a minimizer. In particular, there is no contradiction between our results and the fact that the DFP method is not known to be globally convergent with inexact line search (see, e.g., [31]).
Let us mention several open questions. First, looking at the starting moment of superlinear convergence of the BFGS method, in addition to the dimension of the problem, we see the presence of the logarithm of its condition number. Although typically such logarithmic factors are considered small, it is still interesting to understand whether this factor can be completely removed.
Second, all the superlinear convergence rates, which we have obtained for the convex Broyden class in this paper, are expressed in terms of the parameter τ , which controls the weight of the DFP component in the updating formula for the inverse operator. At the same time, in [27], the corresponding estimates were presented in terms of the parameter φ, which controls the weight of the DFP component in the updating formula for the primal operator. Of course, for the extreme members of the convex Broyden class, DFP and BFGS, φ and τ coincide. However, in general, they could be quite different. We do not know if it is possible to express the results of this paper in terms of φ instead of τ .
Finally, in all the methods we considered, the initial Hessian approximation G₀ was LB, where L is the Lipschitz constant of the gradient, measured relative to the operator B. We have always assumed that this constant is known. It would be interesting to develop adaptive algorithms that could start from an arbitrary initial guess L₀ for the constant L, and then dynamically adjust the Hessian approximations between iterations, while retaining all the original efficiency estimates.