A Note on the Optimal Convergence Rate of Descent Methods with Fixed Step Sizes for Smooth Strongly Convex Functions

Based on a result by Taylor et al. (J Optim Theory Appl 178(2):455–476, 2018) on the attainable convergence rate of gradient descent for smooth and strongly convex functions in terms of function values, an elementary convergence analysis for general descent methods with fixed step sizes is presented. It covers general variable metric methods, gradient-related search directions under angle and scaling conditions, as well as inexact gradient methods. In all cases, optimal rates are obtained.


Introduction
An L-smooth and µ-strongly convex function f : R^n → R is characterized by the two properties

    (µ/2) ‖x − y‖² ≤ f(y) − f(x) − ⟨∇f(x), y − x⟩ ≤ (L/2) ‖x − y‖²

for some constants 0 < µ ≤ L and all x, y ∈ R^n. Here, ⟨·, ·⟩ can be any inner product on R^n with corresponding norm ‖·‖, and ∇f denotes the gradient with respect to this inner product. Note that the constants µ and L depend on the chosen inner product. The class of such functions plays a main role in the convergence theory of the gradient method and related descent methods for finding the unique global minimum x* of a given f. The update rule of the gradient method is

    x⁺ = x − h ∇f(x),

where h > 0 is a step size which may depend on the current point x. It is well known that the fixed step size h = 2/(L + µ) achieves the optimal error reduction per step,

    ‖x⁺ − x*‖ ≤ ((L − µ)/(L + µ)) ‖x − x*‖,    (1.1)

which inductively implies the convergence of the method to x*. We refer to [6, Theorem 2.1.15] for details.
In the more general setting of proximal gradient methods, it has recently been shown by Taylor, Hendrickx, and Glineur [9, Theorem 3.3 with h = 0] that the same rate is also valid for the error in function value. Specifically, for any 0 < h ≤ 2/(L + µ),

    f(x − h∇f(x)) − f(x*) ≤ (1 − hµ)² (f(x) − f(x*)),    (1.2)

and for 2/(L + µ) ≤ h < 2/L,

    f(x − h∇f(x)) − f(x*) ≤ (hL − 1)² (f(x) − f(x*)).    (1.3)

The estimate (1.3) automatically follows from (1.2) by using a weaker strong convexity bound 0 < µ' ≤ µ satisfying h = 2/(L + µ') and noting that 1 − hµ' = hL − 1. The optimal choice in the estimates is h = 2/(L + µ) and leads to

    f(x − h∇f(x)) − f(x*) ≤ ((κ − 1)/(κ + 1))² (f(x) − f(x*)),    (1.4)

where κ = L/µ is the condition number of f. This estimate for one step of the method is highly nontrivial. Obviously, it implies the same inequality for the gradient descent method with exact line search (when the left side is minimized over all h), which has been obtained earlier in [2]. Moreover, this estimate is known to be optimal in the class of L-smooth and µ-strongly convex functions. In fact, it is already optimal for quadratic functions in that class; see, e.g., [2, Example 1.3]. Of course, in many applications the difference f(x) − f(x*) is a natural error measure by itself. For example, for strongly convex quadratic functions it is proportional to the squared energy norm (induced by the quadratic form) of the error x − x*. In general, for an L-smooth and µ-strongly convex function we always have

    (µ/2) ‖x − x*‖² ≤ f(x) − f(x*) ≤ (L/2) ‖x − x*‖²,    (1.5)

which clearly shows that f(x_ℓ) − f(x*) → 0 for an iterative method implies x_ℓ − x* → 0 for ℓ → ∞. Moreover, both error measures will exhibit the same R-linear convergence rate. The novelty of the estimate (1.4) is that one also has an optimal Q-linear rate for the function values, both for fixed step sizes and exact line search. (We refer to [8] for the definitions of R- and Q-linear rates.) However, compared to (1.1), an estimate like (1.4) is "more intrinsic", because the chosen inner product on R^n enters only via the constants µ and L.
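The one-step estimate (1.4) is easy to probe numerically. The following sketch (not part of the original argument; a minimal NumPy check for a quadratic with Hessian spectrum in [µ, L] and the Euclidean inner product) applies one gradient step with h = 2/(L + µ) and compares the function-value reduction with the factor ((κ − 1)/(κ + 1))².

```python
import numpy as np

rng = np.random.default_rng(0)
mu, L = 1.0, 10.0                      # strong convexity and smoothness constants
kappa = L / mu
H = np.diag(np.linspace(mu, L, 5))     # Hessian of a quadratic f with spectrum in [mu, L]

def f(x):
    return 0.5 * x @ H @ x             # f(x*) = 0 with minimizer x* = 0

h = 2.0 / (L + mu)                     # optimal fixed step size
rho = ((kappa - 1.0) / (kappa + 1.0)) ** 2   # contraction factor in (1.4)

x = rng.standard_normal(5)
x_plus = x - h * (H @ x)               # gradient step: grad f(x) = H x
assert f(x_plus) <= rho * f(x) + 1e-12

# the bound is attained for eigenvectors of the extreme eigenvalues:
e1 = np.eye(5)[0]                      # eigenvector for mu
print(f(e1 - h * (H @ e1)) / f(e1))    # -> rho (up to rounding)
```

For eigenvectors of the extreme eigenvalues the bound holds with equality, which reflects the optimality of (1.4) already within the quadratic class.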
In this short note, we illustrate this advantage by showing that (1.4) allows for a rather clean analysis of general variable metric methods, as well as gradient-related methods subject to angle and scaling conditions. In addition, in Theorem 4.2 below we show how (1.4) already implies the sharp rates for inexact gradient methods with fixed step sizes under relative error bounds, based on a suitable change of the metric, thereby improving and simplifying a similar result in [3].

Variable metric method
We first consider the variable metric method. Here the update rule reads

    x⁺ = x − h A⁻¹ ∇f(x),    (2.1)

where A is a symmetric (with respect to the given inner product) and positive definite matrix. It is well known that such an update step can also be interpreted as a gradient step with respect to a modified inner product. This leads to the following result, which will be the basis for our further considerations.
Theorem 2.1. Assume the eigenvalues of A are contained in the positive interval [λ, Λ], and define

    κ_{f,A} = (L/λ) / (µ/Λ),    h* = 2 / (L/λ + µ/Λ).

Then for one step of (2.1) it holds

    f(x⁺) − f(x*) ≤ (1 − h µ/Λ)² (f(x) − f(x*))    for 0 < h ≤ h*,

and

    f(x⁺) − f(x*) ≤ (h L/λ − 1)² (f(x) − f(x*))    for h* ≤ h < 2λ/L.

In particular, the step size h = h* yields

    f(x⁺) − f(x*) ≤ ((κ_{f,A} − 1)/(κ_{f,A} + 1))² (f(x) − f(x*)).    (2.2)

Proof. The result is obtained from (1.2) and (1.3) by noting that A⁻¹∇f(x) is the gradient of f with respect to the A-inner product ⟨x, y⟩_A = ⟨x, Ay⟩. We have

    (µ/Λ) ‖x − y‖_A² ≤ µ ‖x − y‖²    and    L ‖x − y‖² ≤ (L/λ) ‖x − y‖_A²

for all x, y. Together with the two defining properties of f, these inequalities show that f is (L/λ)-smooth and (µ/Λ)-strongly convex with respect to the A-inner product; see, e.g., [6, Theorems 2.1.5 & 2.1.9]. Thus in (1.2) and (1.3), we can replace µ with µ/Λ and L by L/λ, which is exactly the statement of the theorem.
An alternative and somewhat more direct proof of Theorem 2.1, which does not require changing the inner product, can be given by applying the result (1.3) directly to the transformed function g(y) = f(A^{−1/2} y), whose gradient step reproduces (2.1) after the substitution x = A^{−1/2} y. Observe that κ_{f,A} = κ_f · κ_A with κ_A = Λ/λ ≥ 1 the condition number of A. The contraction factor in (2.2) will therefore always be worse than the original factor in (1.4), which corresponds to A = I. This might seem suboptimal, since in Newton's method, and under additional regularity conditions, the contraction factor improves when choosing A = ∇²f(x). However, for the general class of methods (2.1), the result in Theorem 2.1 is optimal. This can already be seen for the function f(x) = (1/2)‖x‖², in which case (2.1) becomes the linear iteration x⁺ = (I − hA⁻¹)x. Its contraction factor as predicted by (2.2) is bounded by (κ_A − 1)²/(κ_A + 1)², which is indeed a tight bound: as in [2, Example 1.3], take A = diag(λ, …, Λ) and x = (x_1, 0, …, 0, x_n). Then an exact line search yields x⁺ = (κ_A − 1)/(κ_A + 1) · (−x_1, 0, …, 0, x_n), and clearly there cannot be a better contraction factor with a fixed step size. Note that the step size h* in Theorem 2.1 also leads to equality in (2.2) when x is an eigenvector corresponding to λ or Λ. For a less trivial example, consider f(x) = (1/2)⟨x, A⁻¹x⟩. Then (2.1) becomes x⁺ = (I − hA⁻²)x, and the same x from above now leads to a contraction with the factor (κ_{A²} − 1)²/(κ_{A²} + 1)², where indeed κ_{A²} = κ_f κ_A, as predicted by Theorem 2.1.
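Theorem 2.1 can likewise be checked numerically. The sketch below is illustrative only (diagonal H and A are simplifying assumptions): it performs one variable metric step (2.1) with the step size h* = 2/(L/λ + µ/Λ) and verifies the contraction factor ((κ_{f,A} − 1)/(κ_{f,A} + 1))² from (2.2).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
mu, L = 0.5, 8.0                          # constants of f
lam, Lam = 1.0, 4.0                       # eigenvalue bounds for the metric A

H = np.diag(np.linspace(mu, L, n))        # f(x) = 0.5 x^T H x, so f(x*) = 0
A = np.diag(np.linspace(lam, Lam, n))     # symmetric positive definite metric

kappa_fA = (L / lam) / (mu / Lam)         # = kappa_f * kappa_A
h = 2.0 / (L / lam + mu / Lam)            # step size h* of Theorem 2.1
rho = ((kappa_fA - 1.0) / (kappa_fA + 1.0)) ** 2

def f(x):
    return 0.5 * x @ H @ x

x = rng.standard_normal(n)
x_plus = x - h * np.linalg.solve(A, H @ x)   # variable metric step (2.1)
assert f(x_plus) <= rho * f(x) + 1e-12
```

For this random pairing of H and A the actual decrease is much better than the worst-case factor; the bound becomes tight only when the extreme eigenvalue ratios µ/Λ and L/λ are simultaneously active.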

Gradient-related methods
Next we provide error estimates for gradient-related descent methods under angle and scaling conditions. Specifically, we consider the update rule

    x⁺ = x − h d,    (3.1)

where −d is a descent direction, that is, d satisfies

    ⟨∇f(x), d⟩ ≥ cos θ ‖∇f(x)‖ ‖d‖    (3.2)

for some θ ∈ [0, π/2). This condition is very natural, since it guarantees the convergence of (3.1); see, e.g., [7, Chapter 3.2]. In particular, for the case of exact line search, it has been shown in [2, Theorem 5.1] that

    f(x⁺) − f(x*) ≤ ((κ_θ − 1)/(κ_θ + 1))² (f(x) − f(x*)),    κ_θ = κ (1 + sin θ)/(1 − sin θ),    (3.3)

and that this Q-linear rate is optimal. For the case of quadratic functions this has been known before; see, e.g., [5]. We also mention the result of [1, Theorem 3.3], which identifies the rate in (3.3) as the optimal R-linear rate for exact line search when f is twice continuously differentiable.
Here, we aim to generalize this result to fixed step sizes. The extent to which this is possible depends on the available information about the quantities ‖∇f(x)‖, ‖d‖, and ⟨∇f(x), d⟩. The basic idea is to interpret (3.1) as a variable metric method in order to apply Theorem 2.1. For this we need to find a symmetric and positive definite matrix A satisfying Ad = ∇f(x) and to estimate its condition number. Such a matrix can be found explicitly using the following lemma, which originates from the SR1 update rule; see, e.g., [7].
Lemma 3.1. Let u, v ∈ R^n with ‖u‖ = ‖v‖ = 1 and ⟨u, v⟩ = cos θ for some θ ∈ [0, π/2). Set

    α = cos θ / (1 + sin θ),    r = u − αv.

Then the matrix

    B = (1/α) (I − r r* / ⟨r, u⟩),

where for θ = 0 one sets B = I, is symmetric (for the given inner product), satisfies Bu = v, and has

    α = sqrt((1 − sin θ)/(1 + sin θ))    and    1/α

as its smallest and largest eigenvalues, respectively. Here, rr* denotes the rank-one matrix satisfying rr*x = r ⟨r, x⟩ for all x ∈ R^n.
Proof. This is checked by a straightforward calculation. Obviously, the matrix I − rr*/⟨r, u⟩ equals the identity on the orthogonal complement of r. Its eigenvalue belonging to the eigenvector r is

    1 − ‖r‖²/⟨r, u⟩ = α²,

where one uses 1 − α cos θ = sin θ and α² = (1 − sin θ)/(1 + sin θ). Therefore, the largest eigenvalue of B is 1/α (with multiplicity n − 1), and the smallest eigenvalue is α.
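The matrix of Lemma 3.1 is easy to form explicitly. The sketch below (illustrative, assuming the SR1-type formulas α = cos θ/(1 + sin θ), r = u − αv, B = (1/α)(I − rr*/⟨r, u⟩) in the Euclidean inner product) verifies Bu = v and the extreme eigenvalues α and 1/α.

```python
import numpy as np

theta = 0.4                         # angle between u and v, in (0, pi/2)
n = 5
u = np.eye(n)[0]                    # unit vector
v = np.cos(theta) * u + np.sin(theta) * np.eye(n)[1]   # unit vector at angle theta to u

alpha = np.cos(theta) / (1.0 + np.sin(theta))
r = u - alpha * v
B = (np.eye(n) - np.outer(r, r) / (r @ u)) / alpha     # SR1-type matrix of Lemma 3.1

assert np.allclose(B, B.T)          # symmetric
assert np.allclose(B @ u, v)        # maps u to v
eigs = np.linalg.eigvalsh(B)
assert np.isclose(eigs.min(), alpha) and np.isclose(eigs.max(), 1.0 / alpha)
# condition number (1 + sin theta)/(1 - sin theta), cf. Remark 3.5
assert np.isclose(eigs.max() / eigs.min(), (1 + np.sin(theta)) / (1 - np.sin(theta)))
```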
With Lemma 3.1 and Theorem 2.1 at our disposal, we can state our main result.

Theorem 3.2. Let d satisfy

    ⟨∇f(x), d⟩ = cos θ ‖∇f(x)‖ ‖d‖,    ‖d‖ = c ‖∇f(x)‖    (3.4)

for some θ ∈ [0, π/2) and c > 0, and define

    κ_θ = κ (1 + sin θ)/(1 − sin θ),    α = cos θ/(1 + sin θ),    h* = 2 / (c (L/α + µα)).

Then the estimates of Theorem 2.1 hold for one step of (3.1) with λ = α/c and Λ = 1/(αc). In particular, the step size h = h* yields

    f(x − h d) − f(x*) ≤ ((κ_θ − 1)/(κ_θ + 1))² (f(x) − f(x*)).

Proof. If d = 0, the assertion is trivially true. Let d ≠ 0. By Lemma 3.1, applied to the unit vectors u = d/‖d‖ and v = ∇f(x)/‖∇f(x)‖, there exists a symmetric and positive definite matrix of the form

    A = (‖∇f(x)‖/‖d‖) B

satisfying Ad = ∇f(x) and having α/c and 1/(αc) as its smallest and largest eigenvalues. The assertion follows therefore directly from Theorem 2.1.
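As a numerical sanity check of this result (an illustrative sketch, assuming the step size h* = 2/(c(L/α + µα)) with α = cos θ/(1 + sin θ)): for a direction d forming an exact angle θ with the gradient and with scaling ‖d‖ = c‖∇f(x)‖, one fixed step on a quadratic contracts the function value by at least ((κ_θ − 1)/(κ_θ + 1))², where κ_θ = κ(1 + sin θ)/(1 − sin θ).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
mu, L = 1.0, 10.0
theta, c = 0.3, 2.0                  # exact angle and scaling of the direction d

H = np.diag(np.linspace(mu, L, n))   # f(x) = 0.5 x^T H x with minimum f(x*) = 0
def f(x):
    return 0.5 * x @ H @ x

x = rng.standard_normal(n)
g = H @ x                            # gradient of f at x
# build d with <g, d> = cos(theta) |g| |d| and |d| = c |g|
w = rng.standard_normal(n)
w -= (w @ g) / (g @ g) * g           # component orthogonal to g
w /= np.linalg.norm(w)
d = c * np.linalg.norm(g) * (np.cos(theta) * g / np.linalg.norm(g) + np.sin(theta) * w)

alpha = np.cos(theta) / (1.0 + np.sin(theta))
h = 2.0 / (c * (L / alpha + mu * alpha))          # fixed step size
kappa_theta = (L / mu) * (1 + np.sin(theta)) / (1 - np.sin(theta))
rho = ((kappa_theta - 1) / (kappa_theta + 1)) ** 2

assert f(x - h * d) <= rho * f(x) + 1e-12
```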
Remark 3.3. The condition (3.4) can be replaced with equivalent conditions such as

    ⟨∇f(x), d⟩ = σ ‖d‖²,    ‖d‖ = c ‖∇f(x)‖

for some σ > 0. An equivalent version of Theorem 3.2 is obtained by observing that cos θ = σc.
To achieve the optimal rate in Theorem 3.2, the exact values of θ and c need to be known in order to compute the optimal step size h*. In practice, this is almost never the case and only bounds are available. We therefore formulate another, more practical result for the method (3.1) under the following relaxed angle and scaling conditions: there exist θ̄ ∈ [0, π/2) and constants 0 < c_1 ≤ c_2 such that

    ⟨∇f(x), d⟩ ≥ cos θ̄ ‖∇f(x)‖ ‖d‖,    c_1 ‖∇f(x)‖ ≤ ‖d‖ ≤ c_2 ‖∇f(x)‖.    (3.5)

Under these conditions, the eigenvalues of the matrix A = (‖∇f(x)‖/‖d‖) B in the proof of Theorem 3.2 can be bounded as

    ᾱ/c_2 ≤ λ_min(A),    λ_max(A) ≤ 1/(ᾱ c_1),    where ᾱ = cos θ̄/(1 + sin θ̄).

The following result is then again immediately obtained from Theorem 2.1.
Theorem 3.4. Assume (3.5) and define

    κ̄ = κ (c_2/c_1) (1 + sin θ̄)/(1 − sin θ̄),    h̄ = 2 / (L c_2/ᾱ + µ ᾱ c_1).

In particular, the step size h = h̄ yields

    f(x − h d) − f(x*) ≤ ((κ̄ − 1)/(κ̄ + 1))² (f(x) − f(x*))

for one step of (3.1). We remark again that if c_1 = c_2 = ‖d‖/‖∇f(x)‖ and θ̄ = θ are known, the resulting statements from Theorem 3.4 coincide with those in Theorem 3.2.

Remark 3.5. We conclude the section with a side remark. When just looking at the proofs of Theorems 3.2 or 3.4, it would be natural to ask whether there exists a symmetric and positive definite matrix B (and thus A) with a smaller condition number than the one from Lemma 3.1. As for the SR1 update rule, when the matrix B = B_α in the lemma is regarded as a function of α ≠ 0, it is well known that the stated α is one of the minimizers of the condition number in the class of all positive definite B_α (another is cos θ/(1 − sin θ)); see, e.g., [10]. Indeed, any B with a smaller condition number would lead to a faster rate in Theorem 3.2 (via Theorem 2.1), which is not possible since the rate is known to be optimal when exact line search is used. This reasoning therefore provides a (rather indirect) proof of the following general statement.
Proposition 3.6. Let u, v ∈ R^n with ‖u‖ = ‖v‖ = 1 and ⟨u, v⟩ = cos θ for some θ ∈ [0, π/2). Then (1 + sin θ)/(1 − sin θ) is the minimum possible (spectral) condition number among all symmetric and positive definite matrices B satisfying Bu = v.
While probably well known in the field, we did not find this fact explicitly stated in the literature.It is, of course, not very difficult to prove this result directly by an elementary calculation on 2 × 2 matrices.
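Returning to Theorem 3.4, its fixed step size can also be probed numerically. The sketch below is illustrative (it assumes the formulas h̄ = 2/(Lc_2/ᾱ + µᾱc_1) and κ̄ = κ(c_2/c_1)(1 + sin θ̄)/(1 − sin θ̄) stated above): the actual angle and scaling of d are unknown to the method and only the bounds θ̄, c_1, c_2 enter the step size.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
mu, L = 1.0, 10.0
theta_bar, c1, c2 = 0.4, 1.0, 3.0       # known bounds in (3.5)

H = np.diag(np.linspace(mu, L, n))      # f(x) = 0.5 x^T H x, f(x*) = 0
def f(x):
    return 0.5 * x @ H @ x

x = rng.standard_normal(n)
g = H @ x
# a direction satisfying (3.5) without attaining the bounds
theta, c = 0.2, 2.0                     # actual (unknown) angle and scaling
w = rng.standard_normal(n)
w -= (w @ g) / (g @ g) * g              # component orthogonal to g
w /= np.linalg.norm(w)
d = c * np.linalg.norm(g) * (np.cos(theta) * g / np.linalg.norm(g) + np.sin(theta) * w)

abar = np.cos(theta_bar) / (1.0 + np.sin(theta_bar))
h = 2.0 / (L * c2 / abar + mu * abar * c1)            # step size of Theorem 3.4
kappa_bar = (L / mu) * (c2 / c1) * (1 + np.sin(theta_bar)) / (1 - np.sin(theta_bar))
rho = ((kappa_bar - 1) / (kappa_bar + 1)) ** 2

assert f(x - h * d) <= rho * f(x) + 1e-12
```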

Inexact gradient method
We now discuss the important case of an inexact gradient method, where instead of the angle and scaling conditions (3.5), it is assumed that

    ‖d − ∇f(x)‖ ≤ ε ‖∇f(x)‖    (4.1)

for some ε ∈ [0, 1). This model is also considered in [2, 3, 4]. Our aim is again to derive convergence rates for a fixed step size rule from the variable metric approach. Since the matrix A in the proof of Theorem 3.2 no longer provides the optimal rates in this case, we use a different construction.
Lemma 4.1. Let u, v ∈ R^n be such that v ≠ 0 and ‖u − v‖ < ‖v‖, and set δ = ‖u − v‖/‖v‖. There exists a symmetric and positive definite matrix A that satisfies Au = v and has 1/(1 + δ) and 1/(1 − δ) as its smallest and largest eigenvalues.

Proof. Set w = u − v. If w = 0, take A = I. Otherwise, since ‖w/δ‖ = ‖v‖, there exists a reflection Q, that is, a symmetric orthogonal matrix, with Qv = w/δ. Then (I + δQ)v = v + w = u, and hence A = (I + δQ)⁻¹ is symmetric, positive definite, and satisfies Au = v. Since Q is symmetric with eigenvalues ±1, the result follows.
Applying the lemma to u = d and v = ∇f(x), the following theorem on the inexact gradient model (4.1) is an immediate consequence of Theorem 2.1.

Theorem 4.2. Assume ∇f(x) ≠ 0 and (4.1) for some ε ∈ [0, 1), and define

    κ_ε = κ (1 + ε)/(1 − ε),    h* = 2 / (L(1 + ε) + µ(1 − ε)).

In particular, the step size h = h* yields

    f(x − h d) − f(x*) ≤ ((κ_ε − 1)/(κ_ε + 1))² (f(x) − f(x*)).    (4.2)

The rate in (4.2) is optimal under the general assumption (4.1), in particular for quadratic f and d satisfying ⟨∇f(x), d⟩ = cos θ ‖d‖ ‖∇f(x)‖ with sin θ = ε. Trivially, for f(x) = (1/2)‖x‖² the estimate (4.2) is sharp for all d satisfying (4.1) with equality.
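Both the construction in Lemma 4.1 and the resulting rate can be verified numerically. The sketch below is illustrative (it assumes the Householder realization of the reflection Q and the step size h* = 2/(L(1 + ε) + µ(1 − ε)) discussed above): the gradient is perturbed by a relative error ε and one fixed step is checked against the contraction (4.2).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
mu, L, eps = 1.0, 10.0, 0.5
H = np.diag(np.linspace(mu, L, n))      # f(x) = 0.5 x^T H x, f(x*) = 0
def f(x):
    return 0.5 * x @ H @ x

x = rng.standard_normal(n)
g = H @ x
e = rng.standard_normal(n)
e *= eps * np.linalg.norm(g) / np.linalg.norm(e)   # ||e|| = eps ||g||
d = g + e                                          # inexact gradient satisfying (4.1)

kappa_eps = (L / mu) * (1 + eps) / (1 - eps)
h = 2.0 / (L * (1 + eps) + mu * (1 - eps))         # fixed step size
rho = ((kappa_eps - 1) / (kappa_eps + 1)) ** 2

assert f(x - h * d) <= rho * f(x) + 1e-12

# the matrix from Lemma 4.1: A d = g with extreme eigenvalues 1/(1 + delta), 1/(1 - delta)
delta = np.linalg.norm(d - g) / np.linalg.norm(g)  # here delta = eps
s = g - (d - g) / delta                            # Householder direction
Q = np.eye(n) - 2 * np.outer(s, s) / (s @ s)       # reflection with Q g = (d - g)/delta
A = np.linalg.inv(np.eye(n) + delta * Q)
assert np.allclose(A @ d, g)
evals = np.linalg.eigvalsh(A)                      # ascending order
assert np.allclose([evals[0], evals[-1]], [1 / (1 + delta), 1 / (1 - delta)])
```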
The result in Theorem 4.2 is not new. In [4, Proposition 1.5] it has been shown that ((κ_ε − 1)/(κ_ε + 1))² is an upper bound for the R-linear convergence rate of the inexact gradient method with a fixed step size. According to [4, Remark 1.6], the estimate (4.2) per step is implicitly contained in the proof of [3, Theorem 5.3], which, however, is rather technical. In addition, the statement of [3, Theorem 5.3] itself covers the rate (4.2) only for a range ε ∈ [0, ε̄] with some ε̄ < 2µ/(L + µ). Our proof via Lemma 4.1 provides a simple alternative for obtaining the result for all ε ∈ [0, 1) directly from the estimate (1.4) for the gradient method (which coincides with [3, Theorem 5.3] when ε = 0).

Conclusions
Based on the result (1.4) due to [9], we have derived optimal convergence rates for the function values of gradient-related descent methods and inexact gradient methods with fixed step sizes for smooth and strongly convex functions. The results are obtained using an elementary variable metric approach, in which a single step is interpreted as a standard gradient step with respect to a modified inner product. This is possible since function values are a metric-independent error measure. Compared to existing results, our proofs offer a more direct way of obtaining the convergence rate estimates of perturbed gradient methods from the rates of their exact counterparts.
