Existence results on Lagrange multiplier approach for gradient flows and application to optimization

This paper deals with the geometric numerical integration of gradient flows and its application to optimization. Gradient flows often appear as model equations of various physical phenomena, and their dissipation laws are essential. Therefore, dissipative numerical methods, i.e., numerical methods replicating the dissipation law, have been studied in the literature. Recently, Cheng, Liu, and Shen proposed a novel dissipative method, the Lagrange multiplier approach, for gradient flows, which is computationally cheaper than existing dissipative methods. Although its efficacy has been numerically confirmed in existing studies, no existence results for the Lagrange multiplier approach are known in the literature. In this paper, we establish some existence results. We prove the existence of a solution under a relatively mild assumption. In addition, by restricting ourselves to a special case, we show some existence and uniqueness results with concrete bounds. As gradient flows also appear in optimization, we further apply the latter results to optimization problems.


Introduction
In this paper, we consider the numerical integration of the gradient flow
ẋ = −D∇V(x), x(0) = x_0, (1)
where x_0 ∈ R^n is an initial condition, V : R^n → R is a differentiable function, and the matrix D ∈ R^{n×n} is positive definite but not necessarily symmetric. Gradient flows (1) are an important class of ordinary differential equations (ODEs) that describe various physical phenomena. Consequently, numerical methods for the gradient flow (1) have also been intensively studied. In particular, specialized numerical schemes that replicate the dissipation law (d/dt) V(x(t)) ≤ 0 have been devised and investigated. Techniques for devising and analyzing such specialized numerical schemes replicating a geometric property of ODEs are known as "geometric numerical integration" techniques (cf. [6]).
The discrete gradient method [4] (see also [10]) is the most popular specialized numerical method for gradient flows. Schemes based on the discrete gradient method are often superior to general-purpose methods, particularly for numerically difficult differential equations.
Although these schemes allow us to employ a larger step size than general-purpose methods, they are usually more expensive per step. Most of these schemes require solving an n-dimensional nonlinear equation per step.
Consequently, several techniques to enhance the computational efficiency have been studied in the literature (see, e.g., [9] and references therein). In particular, Cheng, Liu, and Shen [2] proposed the Lagrange multiplier approach, which replicates the dissipation law. Their method is based on splitting the function V in the form
V(x) = (1/2)⟨x, Qx⟩ + E(x), (2)
where Q ∈ R^{n×n} is symmetric and E : R^n → R is a differentiable function. Note that the splitting is not unique (E may contain a quadratic term); however, when we consider physical problems, the function V often includes a quadratic term so that we can naturally obtain a splitting (see, e.g., [15]). Then, we can construct a numerical scheme preserving the dissipation law by using the implicit midpoint rule for the quadratic term and a special treatment for the nonlinear term E (see Section 2.1 for details). The resulting scheme requires solving a scalar nonlinear equation (and n-dimensional linear equations) per step, which is quite cheap.
Unfortunately, however, the existence of a solution of the scalar nonlinear equation is not known in the literature. Therefore, in this paper, we establish some existence results. First, we prove the existence of the solution under a relatively mild assumption (Section 3.1). Second, by restricting ourselves to a case where Q is the zero matrix, we establish several existence and uniqueness results with concrete bounds on the solution (Section 3.2).
The latter results are useful in the application of the Lagrange multiplier method to optimization problems because the gradient flow (1) also appears in the context of optimization. Investigations of the relationship between optimization methods and the discretization of ODEs were already reported in the 1980s (e.g., [1,14,18]). In addition, inspired by the pioneering work of Su, Boyd, and Candès [16] on Nesterov's accelerated gradient method, research in this direction has been active again in recent years (see [17] and the references therein).
When considering optimization, the dissipation law is also important: it not only guarantees the monotonic decrease of the function value but can also be used to prove convergence rates. Indeed, the discrete gradient method has recently been applied to optimization problems [12,13,3].
Because the Lagrange multiplier method is much cheaper than the discrete gradient method per step, we consider its application to optimization problems. Indeed, when Q is the zero matrix and D is the identity matrix, the resulting scheme can be regarded as the well-known steepest descent method, adopting a new step size criterion. For the scheme, we show the convergence rates for several function classes: (i) general L-smooth functions, (ii) convex functions, and (iii) functions that satisfy the Polyak-Lojasiewicz inequality (Section 4). We also introduce a relaxation technique to further enhance the computational efficiency (Section 5).
It may seem as though the scheme is merely a variant of the basic existing method; moreover, as shown in numerical experiments later, the actual behavior is almost the same as that of the existing method.
However, the optimization methods proposed in this paper have the advantage that the relationship between continuous and discrete systems is clear in the proof of convergence rates (see, e.g., Theorems 2.5 and 4.1). In existing research considering the correspondence between continuous and discrete systems, although the discussion on continuous systems is simple, it is often very complicated to prove the corresponding property in discrete systems. A limitation of this paper is that we deal with the simplest gradient flows; however, it suggests that the above issues can be overcome by geometric numerical integration techniques even when we are dealing with more complicated ODEs that appear in optimization.
The remainder of this paper is organized as follows. Section 2 presents the Lagrange multiplier approach and the relation between gradient flow and optimization. We show several existence results in Section 3 and convergence rates as an optimization method in Section 4. In Section 5, we introduce a relaxation technique. These results are confirmed by numerical experiments in Section 6. Finally, Section 7 concludes this paper.

Preliminaries
2.1. Lagrange multiplier approach. In this section, we review the numerical method proposed by Cheng, Liu, and Shen [2], which preserves the dissipation law.
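For concreteness, the scheme (4) presumably takes the following Lagrange-multiplier form; this is a reconstruction consistent with Remark 2.1 below and with the special case used in Section 3.2, and the exact display should be checked against [2].

```latex
% Presumed form of the scheme (4); x^*_{k+1/2} is an explicit approximation
% of x(t_{k+1/2}), e.g., (3x_k - x_{k-1})/2 or x_k.
\frac{x_{k+1} - x_k}{h}
  = -D\Bigl( Q\,\frac{x_{k+1} + x_k}{2} + \eta_k\,\nabla E\bigl(x^*_{k+1/2}\bigr) \Bigr),
  \tag{4a}
\qquad
E(x_{k+1}) - E(x_k)
  = \eta_k \bigl\langle \nabla E\bigl(x^*_{k+1/2}\bigr),\, x_{k+1} - x_k \bigr\rangle.
  \tag{4b}
```

Here η_k is the scalar Lagrange multiplier; condition (4b) enforces the discrete dissipation law.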
Remark 2.1. The setting x*_{k+1/2} = (3x_k − x_{k−1})/2 achieves second-order accuracy, whereas the setting x*_{k+1/2} = x_k only achieves first-order accuracy. Although the former setting is better in terms of accuracy, the latter is easier to deal with in mathematical analysis. In addition, when we employ the latter, the Lagrange multiplier approach can be regarded as a special case of the discrete gradient method [4,10,11]: Q(x_{k+1} + x_k)/2 + η_k ∇E(x_k) satisfies the conditions of a discrete gradient.

By introducing
p_k := (I + (h/2)DQ)^{-1} (I − (h/2)DQ) x_k, q_k := (I + (h/2)DQ)^{-1} D∇E(x*_{k+1/2}),
we can rewrite (4a) as x_{k+1} = p_k − hη_k q_k. Here, p_k and q_k can be computed by solving linear equations with the same coefficient matrix I + (h/2)DQ, which is invertible for sufficiently small h. Thus, we can compute η_k by solving the scalar nonlinear equation F_h(η_k; x_k) = 0. The scheme (4) requires solving two n-dimensional linear systems and one scalar nonlinear equation per step; moreover, because the coefficient matrix is constant, these systems can be solved quite efficiently. However, existence results for the nonlinear equation F_h(η_k; x_k) = 0 have not been established in the literature.
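One step of the scheme can be sketched as follows. This is a sketch, not the authors' implementation: it assumes that the scalar equation comes from the dissipation condition (4b) with x*_{k+1/2} = x_k, that a sign change of F_h is bracketed near η ≈ 1 (cf. Remark 3.4), and all function names are illustrative.

```python
import numpy as np

def lagrange_multiplier_step(x, h, D, Q, E, grad_E, eta_bracket=(0.5, 2.0)):
    """One step of the Lagrange multiplier scheme (4) with x*_{k+1/2} = x_k.

    Assumes the dissipation condition (4b),
        E(x_{k+1}) - E(x_k) = eta * <grad E(x_k), x_{k+1} - x_k>,
    so that the residual F_h(eta; x_k) below vanishes at the multiplier.
    """
    n = len(x)
    A = np.eye(n) + (h / 2) * D @ Q              # constant coefficient matrix
    p = np.linalg.solve(A, (np.eye(n) - (h / 2) * D @ Q) @ x)
    q = np.linalg.solve(A, D @ grad_E(x))

    def F(eta):                                  # scalar residual F_h(eta; x_k)
        x_new = p - h * eta * q
        return E(x_new) - E(x) - eta * grad_E(x) @ (x_new - x)

    lo, hi = eta_bracket                         # bracket around eta ~ 1 (Remark 3.4)
    assert F(lo) * F(hi) <= 0, "bracket must enclose a sign change"
    for _ in range(60):                          # plain bisection
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0:
            hi = mid
        else:
            lo = mid
    eta = 0.5 * (lo + hi)
    return p - h * eta * q, eta
```

For the quadratic-free special case D = I, Q = 0, this reduces to a steepest descent step with a scalar step-scaling equation, matching the scheme analyzed in Section 3.2.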

Gradient flow and optimization.
In this section, we consider the unconstrained optimization problem min_{x∈R^n} f(x), where the function f : R^n → R is assumed to be L-smooth (i.e., the gradient ∇f is L-Lipschitz continuous) and to satisfy arg min f ≠ ∅. Under this assumption, there is an optimal solution x* and an optimal value f* = f(x*). In particular, we consider the relationship between this problem and the gradient flow
ẋ = −∇f(x), x(0) = x_0, (5)
which is a special case of the general gradient flow (1).
L-smooth functions satisfy the following inequalities.
Lemma 2.3. If f is L-smooth, the following inequalities hold for all x, y ∈ R^n. We sometimes assume that the objective function f is convex or satisfies the Polyak-Lojasiewicz (PL) inequality with parameter µ > 0 (cf. [8]). Note that a µ-strongly convex function satisfies the PL inequality with parameter µ. In addition, if a function is L-smooth and satisfies the PL inequality with parameter µ, then L ≥ µ holds [5].
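For reference, the standard inequalities presumably intended in Lemma 2.3, and the PL inequality, read as follows (the ordering of the two smoothness inequalities is inferred from how they are invoked in Section 3):

```latex
% First inequality (lower quadratic bound) and second inequality (upper
% quadratic bound) for an L-smooth f:
f(x) + \langle \nabla f(x), y - x \rangle - \frac{L}{2}\|y - x\|^2
  \;\le\; f(y) \;\le\;
f(x) + \langle \nabla f(x), y - x \rangle + \frac{L}{2}\|y - x\|^2,
\qquad x, y \in \mathbb{R}^n.
% Polyak--Lojasiewicz (PL) inequality with parameter \mu > 0:
\frac{1}{2}\|\nabla f(x)\|^2 \;\ge\; \mu\,\bigl(f(x) - f^*\bigr).
\tag{6}
```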
When the objective function f is strictly convex, the gradient flow (5) satisfies the following proposition.
Proposition 2.4 (cf. [7]). Let f be a strictly convex function. Then, f is a Lyapunov function of the gradient flow (5), and lim_{t→∞} x(t) = x* holds for any initial condition x_0 ∈ R^n.
Based on the above fact, some researchers have posited the idea of using a numerical method for gradient flow as an optimization method. For example, the explicit Euler method coincides with the steepest descent method. However, we should carefully choose the step size h to ensure the convergence of this method.
Here, because the convergence in continuous time, such as in Proposition 2.4, is based on the dissipation law, a numerical method replicating the dissipation law can be regarded as an optimization method (see, e.g., [3]). Before examining these properties in discrete systems, we review them in continuous systems in this section. The gradient flow (5) satisfies the following three theorems.
Proof. Since f* is the optimal value, t min_{0≤s≤t} ‖∇f(x(s))‖² ≤ ∫₀ᵗ ‖∇f(x(s))‖² ds = f(x_0) − f(x(t)) ≤ f(x_0) − f*, which proves the theorem.
Theorem 2.6. If f is convex, the solution x of the gradient flow (5) satisfies f(x(t)) − f* ≤ ‖x_0 − x*‖²/(2t). Proof. Differentiating E(t) := t(f(x(t)) − f*) + (1/2)‖x(t) − x*‖² along the flow yields Ė(t) ≤ −t‖∇f(x(t))‖² ≤ 0, where the last inequality is due to convexity. Therefore, t(f(x(t)) − f*) ≤ E(t) ≤ E(0) = (1/2)‖x_0 − x*‖² holds, which proves the theorem.
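For orientation, the continuous-time rates that Section 4 discretizes are, presumably, the following; this is a sketch of the standard statements behind Theorems 2.5 to 2.7 for the flow (5), all of which follow from the dissipation identity (d/dt) f(x(t)) = −‖∇f(x(t))‖².

```latex
\min_{0 \le s \le t} \|\nabla f(x(s))\|^2 \le \frac{f(x_0) - f^*}{t}
  \quad \text{($L$-smooth $f$, Theorem~2.5)},
\qquad
f(x(t)) - f^* \le \frac{\|x_0 - x^*\|^2}{2t}
  \quad \text{(convex $f$, Theorem~2.6)},
\qquad
f(x(t)) - f^* \le e^{-2\mu t}\bigl(f(x_0) - f^*\bigr)
  \quad \text{(PL $f$ with parameter $\mu$, Theorem~2.7)}.
```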
Ehrhardt, Riis, Ringholm, and Schönlieb [3] showed that the discrete gradient method with several known constructions of the discrete gradient satisfies corresponding discrete-time rates.

Existence theorems
In this section, we establish existence theorems for the Lagrange multiplier method (4) under the assumption x * k+1/2 = x k . First, we establish an existence result for general splitting in Section 3.1. Then, we restrict ourselves to a special case and obtain existence results with bounds on the solution η k .
3.1. Existence results in the general setting. In this section, we prove the existence of a solution η of the nonlinear equation F_h(η; x_k) = 0 for sufficiently small h by using the intermediate value theorem. For this purpose, we first prove that F_h(η; x_k) > 0 holds for sufficiently large η (Lemma 3.1). Then, we prove that there exists η satisfying F_h(η; x_k) < 0 when ⟨∇E(x_k), D∇V(x_k)⟩ ≠ 0 (Lemma 3.2). These lemmas imply the desired existence theorem (Theorem 3.3) in the case ⟨∇E(x_k), D∇V(x_k)⟩ ≠ 0. In addition, even in the case ⟨∇E(x_k), D∇V(x_k)⟩ = 0, by introducing a small perturbation to the splitting, we can return to the case ⟨∇E(x_k), D∇V(x_k)⟩ ≠ 0.
Let us denote the minimum value of the Rayleigh quotient of a matrix A by ω_min(A). Note that, if A is symmetric, ω_min(A) coincides with the minimum eigenvalue of A. Moreover, since the matrix D is positive definite, ω_min(D) > 0 and ω_min(D^{-1}) > 0 hold.
Proof. Because of the assumption on h, the matrix I + (h/2)DQ is invertible. Therefore, the assumption ∇E(x_k) ≠ 0 implies q_k ≠ 0.
Since E is L_E-smooth, we use Lemma 2.3 and obtain a lower bound on F_h(η; x_k) whose right-hand side is a quadratic function with respect to η with positive leading coefficient; hence F_h(η; x_k) > 0 holds for sufficiently large η, which proves Lemma 3.1.
Proof of Lemma 3.2. The corresponding expansion of ∂/∂h F_h(η; x_k)|_{h=0} is a quadratic function with respect to η with positive leading coefficient. In addition, under the assumption of the lemma, this quadratic function has two distinct real roots. Therefore, for any η between these two real roots, ∂/∂h F_h(η; x_k)|_{h=0} < 0 holds. This implies the lemma because F_0(η; x_k) = 0 holds and F_h(η; x_k) is continuous with respect to h for sufficiently small h.
Combining Lemmas 3.1 and 3.2, we obtain the following existence theorem because of the intermediate value theorem.
Theorem 3.3. If ⟨∇E(x_k), D∇V(x_k)⟩ ≠ 0 holds, then, for all sufficiently small h, there exists a solution of the scheme (4).
Remark 3.4. The above argument implies that the scalar nonlinear equation F_h(η; x_k) = 0 has at least two solutions. In this sense, the usual uniqueness does not hold for the scheme. However, roughly speaking, the proof of Lemma 3.2 implies that, for sufficiently small h, the two solutions are close to 1 and to −⟨∇E(x_k), DQx_k⟩/⟨∇E(x_k), D∇E(x_k)⟩, respectively. Since the Lagrange multiplier of the continuous system is η(t) = 1, the solution closest to 1 should be used in numerical computation.
Finally, we consider the case ⟨∇E(x_k), D∇V(x_k)⟩ = 0. If ∇V(x_k) = 0 holds, x_k is an equilibrium point of the system. Therefore, we focus on the case ∇V(x_k) ≠ 0 hereafter. In this case, by introducing a small perturbation to the splitting (2), we can use Theorem 3.3.
For an arbitrary ε ≠ 0, we consider the perturbed splitting (7). Then, the corresponding inner product can be rewritten so that its rightmost side is nonzero because D is positive definite and ∇V(x) = Qx + ∇E(x). Therefore, Theorem 3.3 implies that the Lagrange multiplier scheme with the splitting (7) has a solution. Consequently, by using the perturbed scheme only when ⟨∇E(x_k), D∇V(x_k)⟩ = 0, we can continue to compute numerical solutions.

3.2. Existence results in a special case. In this section, we further assume that D = I and that Q is the zero matrix. The results in this section can be extended to the case with general D (see Appendix A); however, here we focus on the simple gradient flow (5) because the existence results in this case can be utilized in optimization (see Sections 2.2 and 4).
In this case, the scheme can be written in the form
x_{k+1} = x_k − hη_k ∇f(x_k), (8a)
where η_k is chosen so that f(x_{k+1}) − f(x_k) = −hη_k² ‖∇f(x_k)‖². (8b)
Then, x_{k+1} can be computed by solving the scalar nonlinear equation
F_h(η_k; x_k) := f(x_k − hη_k ∇f(x_k)) − f(x_k) + hη_k² ‖∇f(x_k)‖² = 0.
This equation has the trivial solution η_k = 0 (cf. Remark 3.4), and we prove an existence theorem below for a nontrivial solution.
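One step of this special case can be sketched as follows. The bracket for the bisection is written explicitly as an assumption, using the lower bound η_LB = (1 + Lh/2)^{-1} of Theorem 3.5 and the upper bound (1 − Lh/2)^{-1} of Theorem 3.6, so the sketch assumes h < 2/L; function names are illustrative.

```python
import numpy as np

def lm_descent_step(f, grad_f, x, h, L):
    """One step of the scheme (8): a steepest descent step x - h*eta*grad f(x)
    whose scaling eta solves the scalar equation F_h(eta; x) = 0, bracketed
    by the bounds of Theorems 3.5 and 3.6 (assumes h < 2/L)."""
    g = grad_f(x)
    gg = g @ g
    if gg == 0.0:
        return x, 1.0                    # stationary point: any eta works
    def F(eta):                          # F_h(eta; x) from (8b)
        return f(x - h * eta * g) - f(x) + h * eta**2 * gg
    lo = 1.0 / (1.0 + L * h / 2.0)       # eta_LB: F(lo) <= 0 (Theorem 3.5)
    hi = 1.0 / (1.0 - L * h / 2.0)       # F(hi) >= 0 (Theorem 3.6)
    for _ in range(60):                  # bisection for the nontrivial root
        mid = 0.5 * (lo + hi)
        if F(mid) <= 0.0:
            lo = mid
        else:
            hi = mid
    eta = 0.5 * (lo + hi)
    return x - h * eta * g, eta
```

For a quadratic f with Hessian I, the nontrivial root is exactly η = (1 + h/2)^{-1}, which is a convenient sanity check.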
Theorem 3.5. Let f : R^n → R be an L-smooth function satisfying arg min f ≠ ∅. Then, for any x_k ∈ R^n, there exists an η_k that satisfies F_h(η_k; x_k) = 0 and η_k ≥ (1 + Lh/2)^{-1} =: η_LB. Proof. In this proof, we use the notation d_k := −∇f(x_k) for brevity. If d_k = 0, then F_h(η_k; x_k) = 0 holds for any η_k ∈ R, so the theorem holds. Therefore, we focus on the case d_k ≠ 0 hereafter. Then, because arg min f ≠ ∅, f is bounded from below, so that lim_{η→∞} F_h(η; x_k) = ∞ holds.
Because we assume that f is L-smooth, the second inequality of Lemma 2.3 implies that F_h(η_LB; x_k) ≤ 0 holds, which proves the theorem due to the intermediate value theorem.
The theorem above gives a lower bound on the nontrivial solution, and the following theorem gives an upper bound for a sufficiently small step size h.
Proof. From (9), F_h(η_k; x_k) < 0 holds for any η_k ∈ (0, η_LB). Then, by using the first inequality in Lemma 2.3 (the proof is similar to that of Theorem 3.5), we see that F_h(η_k; x_k) > 0 holds for any η_k > (1 − Lh/2)^{-1}, which proves the theorem.
Moreover, if f is convex or satisfies the PL inequality (6), there is an upper bound that is valid for any step size h.
Theorem 3.7. Let f : R^n → R be an L-smooth function satisfying arg min f ≠ ∅. If f is convex and ∇f(x_k) ≠ 0 holds, then there exists a unique nontrivial solution η_k of the nonlinear equation F_h(η_k; x_k) = 0. Proof. The convexity of f implies that F_h(η_k; x_k) is strictly convex with respect to η_k, so that the nontrivial solution is unique.
Since f is convex, we obtain a point at which F_h is nonnegative, which proves the theorem owing to the intermediate value theorem.

Convergence rates of the Lagrange multiplier method as an optimization method
The scheme (8) described in the previous section can be interpreted as the steepest descent method with a new step-size criterion (8b). In this section, we show convergence rates corresponding to Theorems 2.5 to 2.7. Hereafter, we assume that f is L-smooth and that arg min f ≠ ∅ holds.
First, we establish the discrete counterpart of Theorem 2.5 as follows.
Theorem 4.1. Let {x_k}_{k=0}^∞ be a sequence satisfying (8), and let η_k ≠ 0 hold for every non-negative integer k. Then, the following inequalities hold. Proof. Similar to the proof of Theorem 2.5, we telescope the dissipation identity (8b); by the definition of η_LB, the resulting estimate proves the theorem.
From the theorem above, we obtain the following result when f is coercive.
Corollary 4.2. Let f be coercive, let {x_k}_{k=0}^∞ be a sequence satisfying (8), and let η_k ≠ 0 hold for every non-negative integer k. Then, the sequence {x_k}_{k=0}^∞ has an accumulation point; moreover, ∇f(x*) = 0 holds for any accumulation point x*.
Moreover, if f is convex or satisfies the PL inequality, we can show the discrete counterparts of Theorems 2.6 and 2.7.
Theorem 4.3. Let f be convex, let {x_k}_{k=0}^∞ be a sequence satisfying (8), and let η_k ≠ 0 hold for every non-negative integer k.
Proof. Let us introduce the discrete counterpart of E in the proof of Theorem 2.6. The last term on the right-hand side can be evaluated directly. Using this evaluation and the bound η_k ≥ (Lh/2 + 1)^{-1}, we see that there exists k_0 ∈ N such that 1/2 − Σ_{i=0}^k η_i ≤ 0 holds for any k ≥ k_0. Therefore, the claimed estimate holds.
Theorem 4.4. Let {x_k}_{k=0}^∞ be a sequence satisfying (8), and let η_k ≠ 0 hold for every non-negative integer k. If f satisfies the Polyak-Lojasiewicz inequality (6) with parameter µ > 0, the sequence {f(x_k) − f*}_{k=0}^∞ converges linearly. Proof. We introduce the discrete counterpart L_k := f(x_k) − f* of L in the proof of Theorem 2.7. Because 1 + r ≤ e^r holds for any real number r, we obtain the claimed rate.

Some relaxations of the Lagrange multiplier method
As an optimization method, the scheme (8) is still more expensive than the standard optimization methods. Therefore, in this section, we propose a relaxation of the scheme (8) that allows us to use a backtracking technique.
In view of the dissipation law (Theorem 2.2), the condition (8b) can be relaxed to F_h(η_k; x_k) ≤ 0; we consider the resulting scheme (10).
Theorem 5.1. A solution x_{k+1} of the scheme (10) satisfies the discrete dissipation law f(x_{k+1}) ≤ f(x_k). Proof. We see that F_h(η_k; x_k) ≤ 0 implies f(x_{k+1}) − f(x_k) ≤ −hη_k² ‖∇f(x_k)‖² ≤ 0, which proves the theorem.
Because the discrete dissipation law is crucial in the discussion in the previous section, we can prove the convergence rates even after this relaxation (Section 5.1); moreover, we propose another method to adaptively change h at every step, and also show convergence rates for it in Section 5.2.

5.1.
A relaxation of the Lagrange multiplier method. In this section, we consider Algorithm 1.

Algorithm 1 Backtracking
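A minimal sketch of Algorithm 1, under the assumptions that the backtracking starts from η = 1 and that the acceptance test is the relaxed condition F_h(η; x_k) ≤ 0 of the scheme (10); function names are illustrative.

```python
import numpy as np

def lm_backtracking_step(f, grad_f, x, h, alpha=0.8):
    """Algorithm 1 (sketch): relax (8b) to the inequality F_h(eta; x_k) <= 0
    and find eta by backtracking eta <- alpha * eta from eta = 1
    (the starting value eta = 1 is an assumption; cf. eta(t) = 1)."""
    g = grad_f(x)
    gg = g @ g
    F = lambda eta: f(x - h * eta * g) - f(x) + h * eta**2 * gg
    eta = 1.0
    while F(eta) > 0.0:      # stops after O(log_alpha eta_LB) halvings (Lemma 5.2)
        eta *= alpha
    return x - h * eta * g, eta
```

Each backtracking trial costs one function evaluation, and by Lemma 5.2 the accepted value satisfies η_k ≥ α η_LB, which is what the convergence proofs use.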
Because Theorems 2.5 to 2.7 rely on the lower bound of η k as well as the discrete dissipation law, we establish the following lemma.
Lemma 5.2. The backtracking iteration in Algorithm 1 stops after at most log_α η_LB steps, so that η_k ≥ α η_LB holds.
By using the lemma, we obtain the following convergence results. We omit the proof because it can be proved in a manner similar to that in Theorems 2.5 to 2.7.
Moreover, if f is convex, the counterpart of the estimate in Theorem 4.3 holds.

5.2. Adaptive step size. In this section, we consider adaptively changing the step size h_k at every step. Here, instead of F_h(η_k; x_k) ≤ 0, we use the condition F_{h_k}(η_k; x_k) ≤ 0. Then, h_{k+1} is defined by h_{k+1} = h_k η_k / η*, which is intended to keep η_{k+1} around a fixed constant η*. As shown in the numerical experiments in the next section, this simple strategy reduces the number of backtracking iterations, and the numerical results do not depend significantly on the choice of h_0.
Algorithm 2 Adaptive step size
In view of Lemma 5.2, we see that η_k ≥ 2α/(Lh_k + 2). The assumption η* < α ensures that {h_k}_{k=0}^∞ is bounded from below, as shown in the lemma below. This lower bound on {h_k}_{k=0}^∞ will be used in the proofs of the convergence rates.
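Algorithm 2 can be sketched as follows; the inner backtracking starting from η = 1 and all function names are assumptions of this sketch.

```python
import numpy as np

def lm_adaptive_descent(f, grad_f, x0, h0, n_steps, alpha=0.8, eta_star=0.5):
    """Algorithm 2 (sketch): per-step backtracking for F_{h_k}(eta; x_k) <= 0,
    followed by the update h_{k+1} = h_k * eta_k / eta_star, which steers
    eta_k toward the target value eta_star."""
    x = np.asarray(x0, dtype=float)
    h = float(h0)
    for _ in range(n_steps):
        g = grad_f(x)
        gg = g @ g
        if gg == 0.0:
            break                        # reached a stationary point
        eta = 1.0
        while f(x - h * eta * g) - f(x) + h * eta**2 * gg > 0.0:
            eta *= alpha                 # backtracking as in Algorithm 1
        x = x - h * eta * g
        h = h * eta / eta_star           # adaptive step-size update
    return x
```

If η_k comes out below η*, the next step size shrinks, and vice versa, so after a transient the method rarely backtracks more than a few times per step.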
Lemma 5.4. If h_0 ≥ h_LB := 2(α − η*)/(η* L), then h_k ≥ h_LB holds for any positive integer k. Proof. We prove the lemma by induction. Suppose that h_k ≥ h_LB holds. Then, the bound η_k ≥ 2α/(Lh_k + 2) yields h_{k+1} = h_k η_k/η* ≥ h_LB, which proves the lemma.

Convex functions.
In this section, we deal with convex functions.
Theorem 5.5. We assume that h_0 ≥ h_LB and η* ≥ 1/2 hold. If f is convex, the sequence {x_k}_{k=0}^∞ obtained by Algorithm 2 satisfies the following estimate. Proof. Let us introduce the discrete counterpart of E in the proof of Theorem 2.6. Then, similar to the proof of Theorem 4.3, we see that E_{k+1} − E_k ≤ 0 holds owing to the assumption η* ≥ 1/2. Therefore, the claimed estimate holds, which proves the theorem.

5.2.2. Functions satisfying the PL inequality. In this section, we deal with functions satisfying the PL inequality. In this case, we need an upper bound on {h_k}_{k=0}^∞ as well as the lower bound. Lemma 5.6. Assume that f satisfies the Polyak-Lojasiewicz inequality (6) with parameter µ > 0 and that h_0 ≤ h_UB := 1/(2µ(η*)²) holds. Then, h_k ≤ h_UB holds for any positive integer k.
Proof. In a manner similar to the proof of Theorem 3.8, we see that η_k ≤ 1/(2µh_k). We then prove the lemma by induction. Suppose that h_k ≤ h_UB holds. Then, h_{k+1} = h_k η_k/η* ≤ 1/(2µη*) ≤ h_UB, which proves the lemma.
where κ := L/µ is the condition number.
Proof. We introduce the discrete counterpart L_k := f(x_k) − f* of L in the proof of Theorem 2.7. Then, combining the bounds on h_k and η_k yields the claimed rate, which proves the theorem.

Numerical experiments
The efficacy of the Lagrange multiplier method for differential equations is well documented by Cheng, Liu, and Shen [2]. Therefore, in this section, we focus on the application to optimization: we compare Algorithms 1 and 2 with the steepest descent method with a fixed step size h = 1/L and with a step size satisfying the standard Armijo rule
f(x_k − h_k ∇f(x_k)) ≤ f(x_k) − c h_k ‖∇f(x_k)‖²,
where c ∈ (0, 1) is a parameter and h_k is obtained by a standard backtracking line search with parameter α ∈ (0, 1).
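For comparison, a sketch of the baseline: one steepest descent step with the standard Armijo backtracking line search (parameter defaults are illustrative, not the values used in the experiments).

```python
import numpy as np

def armijo_step(f, grad_f, x, h_init, c=1e-4, alpha=0.8):
    """Steepest descent step with the standard Armijo rule
        f(x - h*g) <= f(x) - c * h * ||g||^2,
    where h is found by backtracking h <- alpha * h from h_init."""
    g = grad_f(x)
    gg = g @ g
    h = h_init
    while f(x - h * g) > f(x) - c * h * gg:
        h *= alpha
    return x - h * g, h
```

The structural difference from Algorithms 1 and 2 is only the acceptance test: the Armijo rule compares against a linear model with slope c, whereas (10) compares against the quadratic dissipation term hη²‖∇f(x_k)‖².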
Throughout the numerical experiment in this section, the parameter α in the Armijo rule and Algorithms 1 and 2 is fixed at α = 0.8. Because we investigate the difference in the results depending on the step size criteria in this experiment, we choose the parameter α corresponding to a relatively precise line search. In addition, we fix the parameter η * = 0.5 in view of Theorem 5.5.
6.1. Quadratic function. First, we consider a quadratic function determined by A ∈ R^{n×n} and b ∈ R^n. In this section, we fix n = 500 and b ∈ R^n, whose elements are independently sampled from the normal distribution N(0, 5). We also fix the symmetric positive definite matrix A ∈ R^{n×n}, defined as A = QᵀΛQ by using a diagonal matrix Λ, whose elements are sampled from the uniform distribution on [0.001, 1], and an orthogonal matrix Q sampled from the Haar measure on the orthogonal group. The resulting matrix A has maximum eigenvalue L ≈ 0.998 and minimum eigenvalue µ ≈ 0.0022. We set the initial step size of the backtracking line search for the Armijo rule to 10. Table 1 summarizes the average step size and the number of backtracking iterations. In Figure 1, we omit the Armijo rule with c = 0.1, Algorithm 1 with h = 100, and Algorithm 2 with h_0 = 1, 100 because they are very similar to the Armijo rule with c = 10^{-4}, Algorithm 1 with h = 10, and Algorithm 2 with h_0 = 10, respectively.
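The test problem of this subsection can be generated, for instance, as follows; the random seed, the reading of N(0, 5) as variance 5, and the exact quadratic objective f(x) = xᵀAx/2 − bᵀx are assumptions not fixed by the text.

```python
import numpy as np

# Reproduction sketch of the Section 6.1 test problem (seed and objective
# form are assumptions).
rng = np.random.default_rng(0)
n = 500
b = rng.normal(0.0, np.sqrt(5.0), size=n)          # elements ~ N(0, 5)
lam = rng.uniform(0.001, 1.0, size=n)              # prescribed spectrum of A
Qmat, _ = np.linalg.qr(rng.normal(size=(n, n)))    # orthogonal factor of a
                                                   # Gaussian matrix (Haar up
                                                   # to sign conventions)
A = Qmat.T @ np.diag(lam) @ Qmat                   # symmetric positive definite
f = lambda x: 0.5 * x @ A @ x - b @ x              # assumed quadratic objective
```

By construction, the eigenvalues of A are exactly the sampled diagonal entries, so the smoothness constant L and the strong convexity constant µ are the extreme entries of `lam`.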
The results of Algorithm 1 with an appropriate h and those of Algorithm 2 are similar to those of the Armijo rule with a small c. Because the Armijo rule with c = 0.5 is similar to the exact line search for quadratic functions, it outperforms the other methods.
6.2. Log-Sum-Exp function. Second, we consider the Log-Sum-Exp function with parameters a_i ∈ R^n (1 ≤ i ≤ m), b_i ∈ R (1 ≤ i ≤ m), and ρ > 0. In this section, we fix n = 50, m = 200, and ρ = 20. We also fix a_i and b_i, whose elements are independently sampled from the normal distributions N(0, 1) and N(0, √2), respectively. The resulting a_i satisfy max_{1≤k≤m} ‖a_k‖² ≈ 42.687, and the Lipschitz constant L satisfies L ≤ max_{1≤k≤m} ‖a_k‖². We set the initial step size of the backtracking line search for the Armijo rule to 100. Table 2 summarizes the average step size and the number of backtracking iterations. The results of Algorithm 1 with an appropriate h and those of Algorithm 2 are similar to those of the Armijo rule with a small c. Although the Armijo rule with c = 0.5 converges faster than the other methods, the rate itself is similar to those of Algorithm 1 with an appropriate h and Algorithm 2.
6.3. A nonconvex function satisfying the PL inequality. Finally, we consider the function used in [3], where b ∈ R^n is a vector that satisfies ‖b‖ = 1. This function is 8-smooth, nonconvex, and satisfies the Polyak-Lojasiewicz inequality (6) with parameter µ = 1/32. In this section, we fix n = 50 and b = v/‖v‖ ∈ R^n, where the elements of v ∈ R^n are independently sampled from the normal distribution N(0, 1). We set the initial step size of the backtracking line search for the Armijo rule to 10. Figure 3 summarizes the evolution of the function values, and Table 3 summarizes the average step size and the number of backtracking iterations.

Conclusion
In this paper, we established existence results on the Lagrange multiplier approach, a recent geometric numerical integration technique, for the gradient system. In addition, we showed that, when Q is the zero matrix, the Lagrange multiplier approach yields a new step-size criterion for the steepest descent method. Thanks to the discrete dissipation law, the convergence rates of the proposed method for several cases can be proved in a form similar to the discussions on ODEs. In this paper, we focused only on the simplest gradient flow, but the results suggest that geometric numerical integration techniques can be effective for other ODEs appearing in optimization problems.
Several issues remain to be investigated. First, it would be interesting to investigate the application of geometric numerical integration techniques to other ODEs that appear in optimization. Second, the existence results in this paper cover only a special case of the Lagrange multiplier approach. Because the assumption x*_{k+1/2} := x_k is somewhat restrictive in the usual numerical integration of ODEs and PDEs, it is important to generalize the existence results.

Appendix A
In this appendix, we consider the case where Q is the zero matrix but D is a general positive definite matrix (recall that we further assume D = I in Section 3.2). In this case, the scheme can be written as x_{k+1} = x_k − hη_k D∇f(x_k). Then, x_{k+1} can be computed by solving a scalar nonlinear equation. Even in this case, the counterparts of Theorems 3.5 to 3.8 hold as follows. We omit their proofs because they are similar to those of the counterparts in Section 3.2.
Theorem A.1. For any x k ∈ R n , there exists an η k that satisfies F h (η k ; x k ) = 0 and η k ≥ 1 +