Modified memoryless spectral-scaling Broyden family on Riemannian manifolds

This paper presents modified memoryless quasi-Newton methods based on the spectral-scaling Broyden family on Riemannian manifolds. The method involves adding one parameter to the search direction of the memoryless self-scaling Broyden family on the manifold. Moreover, it uses a general map instead of vector transport. This idea has already been proposed within a general framework of Riemannian conjugate gradient methods where one can use vector transport, scaled vector transport, or an inverse retraction. We show that the search direction satisfies the sufficient descent condition under some assumptions on the parameters. In addition, we show global convergence of the proposed method under the Wolfe conditions. We numerically compare it with existing methods, including Riemannian conjugate gradient methods and the memoryless spectral-scaling Broyden family. The numerical results indicate that the proposed method with the BFGS formula is suitable for solving an off-diagonal cost function minimization problem on an oblique manifold.


Introduction
Riemannian optimization has recently attracted a great deal of attention and has been used in many applications, including low-rank tensor completion [10,30], machine learning [17], and shape analysis [8].
Iterative methods for solving unconstrained optimization problems in Euclidean space have been studied for a long time [18]. Quasi-Newton methods and nonlinear conjugate gradient methods are especially important and have been implemented in various software packages.
However, quasi-Newton methods need to store dense matrices, which makes them difficult to apply to large-scale problems. Shanno [27] proposed a memoryless quasi-Newton method as a way to deal with this problem. This method [9,12-15] has proven effective at solving large-scale unconstrained optimization problems. The concept is simple: the approximate matrix is updated by using the identity matrix instead of the previous approximate matrix. As with nonlinear conjugate gradient methods, the search direction can then be computed using only inner products, without storing any matrices.
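To make the memoryless idea concrete, the Euclidean sketch below (our illustration, not the paper's code) computes the memoryless BFGS direction d = -Hg, where H is the BFGS update of the identity matrix, using only inner products:

```python
import numpy as np

def memoryless_bfgs_direction(g, s, y):
    """d = -H g, where H is the BFGS update of the identity matrix:
        H = (I - s y^T / s^T y)(I - y s^T / s^T y) + s s^T / s^T y.
    Expanding H g term by term avoids storing any n x n matrix."""
    sy = s @ y                      # curvature s^T y (assumed positive)
    sg, yg = s @ g, y @ g
    Hg = (g - (yg / sy) * s - (sg / sy) * y
          + ((y @ y) * sg / sy**2) * s + (sg / sy) * s)
    return -Hg
```

When s^T y > 0 (which the Wolfe conditions guarantee), H is positive definite, so the returned vector is a descent direction.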
Kou and Dai [9] proposed a modified memoryless spectral-scaling BFGS method. Their method involves adding one parameter to the search direction of the memoryless self-scaling BFGS method. In [13], Nakayama used this technique to devise a memoryless spectral-scaling Broyden family. In addition, he showed that the search direction is a sufficient descent direction and that the method has the global convergence property. Nakayama, Narushima, and Yabe [15] proposed memoryless quasi-Newton methods based on the spectral-scaling Broyden family [3]. Their methods generate a sufficient descent direction and have the global convergence property.
Many useful iterative methods for solving unconstrained optimization problems on manifolds have been studied (see [2,24]). They have been obtained by extending iterative methods in Euclidean space by using the concepts of retraction and vector transport. For example, Riemannian quasi-Newton methods [6,7] and Riemannian conjugate gradient methods [20,24,26,34] have been developed. Sato and Iwai [26] introduced scaled vector transport [26, Definition 2.2] in order to remove the assumption of isometric vector transport from the convergence analysis. Zhu and Sato [34] proposed Riemannian conjugate gradient methods that use an inverse retraction instead of vector transport. In [24], Sato proposed a general framework of Riemannian conjugate gradient methods. This framework uses a general map instead of vector transport and encompasses existing Riemannian conjugate gradient methods, such as those that use vector transport, scaled vector transport [26], or an inverse retraction [34].
In [19], Ring and Wirth proposed a Riemannian BFGS method that has a global convergence property under certain convexity assumptions. Narushima et al. [16] proposed memoryless quasi-Newton methods based on the spectral-scaling Broyden family on Riemannian manifolds. They extended the memoryless spectral-scaling Broyden family in Euclidean space to Riemannian manifolds with an additional modification to ensure a sufficient descent condition. Moreover, they presented a global convergence analysis under the Wolfe conditions. In particular, they did not assume convexity of the objective function or isometric vector transport. The results of the previous studies are summarized in Tables 1 and 2.
In this paper, we propose a modified memoryless quasi-Newton method based on the spectral-scaling Broyden family on Riemannian manifolds, exploiting the idea used in [13].
In the Euclidean case, Nakayama [13] reported that the modified memoryless quasi-Newton method based on the spectral-scaling Broyden family performs well experimentally with parameter tuning. Therefore, it is worth extending it to Riemannian manifolds. Our method is based on the memoryless quasi-Newton methods on Riemannian manifolds proposed by Narushima et al. [16] as well as on the modification by Kou and Dai [9]. It uses a general map to transport vectors, similarly to the general framework of Riemannian conjugate gradient methods [25]. This generalization allows us to use maps such as an inverse retraction [34] instead of vector transport. We show that our method generates a search direction satisfying the sufficient descent condition under some assumptions on the parameters (see Proposition 1). Moreover, we present a global convergence analysis under the Wolfe conditions (see Theorem 2). Furthermore, we describe the results of numerical experiments comparing our method with existing ones, including Riemannian conjugate gradient methods [20] and the memoryless spectral-scaling Broyden family on Riemannian manifolds [16]. The key advantages of the proposed method are the added parameter ξ_{k-1} and the support for maps other than vector transport. As the numerical experiments show, the proposed method may outperform the existing methods depending on how the parameter ξ_{k-1} is chosen. It also has an advantage over [16] in that it can use a map such as an inverse retraction, which is not applicable in [16].
This paper is organized as follows. Section 2 reviews the fundamentals of Riemannian geometry and Riemannian optimization. Section 3 proposes the modified memoryless quasi-Newton method based on the spectral-scaling Broyden family. Section 4 gives a global convergence analysis. Section 5 compares the proposed method with the existing methods through numerical experiments. Section 6 concludes the paper.

Mathematical preliminaries
Let M be a Riemannian manifold with Riemannian metric g. T_xM denotes the tangent space of M at a point x ∈ M, and the tangent bundle of M is denoted by TM. The induced norm of a tangent vector η ∈ T_xM is defined by ‖η‖_x := √⟨η, η⟩_x. For a given tangent vector η ∈ T_xM, η♭ represents the flat of η, i.e., η♭ : T_xM → R : ξ ↦ ⟨η, ξ⟩_x. Let F : M → N be a smooth map between smooth manifolds; then, the derivative of F at x ∈ M is denoted by DF(x) : T_xM → T_{F(x)}N. For a smooth function f : M → R, grad f(x) denotes the Riemannian gradient at x ∈ M, i.e., the unique element of T_xM satisfying ⟨grad f(x), η⟩_x = Df(x)[η] for all η ∈ T_xM. Hess f(x) denotes the Riemannian Hessian at x ∈ M, defined by Hess f(x)[η] := ∇_η grad f(x), where ∇ denotes the Levi-Civita connection of M (see [2]).

Definition 1. Any smooth map R : TM → M is called a retraction on M if it has the following properties:
• R_x(0_x) = x, where 0_x denotes the zero element of T_xM;
• DR_x(0_x) = id_{T_xM} with the canonical identification T_{0_x}(T_xM) ≃ T_xM, where R_x denotes the restriction of R to T_xM.

Definition 2. Any smooth map T : TM ⊕ TM → TM : (η, ξ) ↦ T_η(ξ) is called a vector transport on M if it has the following properties:
• there exists a retraction R on M such that T_η(ξ) ∈ T_{R_x(η)}M for all x ∈ M and η, ξ ∈ T_xM;
• T_{0_x}(ξ) = ξ for all ξ ∈ T_xM;
• T_η(aξ + bζ) = a T_η(ξ) + b T_η(ζ) for all a, b ∈ R and η, ξ, ζ ∈ T_xM.
Let us consider an iterative method in Riemannian optimization. For an initial point x_0 ∈ M, step size α_k > 0, and search direction η_k ∈ T_{x_k}M, the k-th approximation to the solution is updated as

x_{k+1} = R_{x_k}(α_k η_k),    (1)

where R is a retraction. We define g_k := grad f(x_k). Various algorithms have been developed to determine the search direction η_k. We say that η_k is a sufficient descent direction if the sufficient descent condition

⟨g_k, η_k⟩_{x_k} ≤ -κ ‖g_k‖²_{x_k}    (2)

holds for some constant κ > 0.
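As a concrete instance of update (1), the sketch below (our illustration) uses the projective retraction on the unit sphere embedded in R^n and checks the sufficient descent condition (2) for the steepest descent direction η_k = -g_k, which satisfies it with κ = 1:

```python
import numpy as np

def retract_sphere(x, eta):
    """Projective retraction on the unit sphere: R_x(eta) = (x + eta) / ||x + eta||."""
    v = x + eta
    return v / np.linalg.norm(v)

def is_sufficient_descent(g, eta, kappa):
    """Check <g, eta> <= -kappa ||g||^2 in the metric inherited from R^n."""
    return g @ eta <= -kappa * (g @ g)

# one iteration x_{k+1} = R_{x_k}(alpha_k eta_k) with a steepest descent direction
rng = np.random.default_rng(1)
x = rng.standard_normal(5); x /= np.linalg.norm(x)   # point on the sphere
g = rng.standard_normal(5); g -= (x @ g) * x          # tangent vector at x
x_next = retract_sphere(x, -0.1 * g)                  # stays on the sphere
```

The retraction keeps every iterate on the manifold, which is exactly the role retraction plays in update (1).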
In [6,7,16], the search direction η_k ∈ T_{x_k}M of Riemannian quasi-Newton methods is computed as

η_k = -H_k[g_k],    (3)

where H_k : T_{x_k}M → T_{x_k}M is a linear operator approximating the inverse Hessian. In [25], Sato proposed a general framework of Riemannian conjugate gradient methods by using a map T^{(k-1)} : T_{x_{k-1}}M → T_{x_k}M satisfying the following assumption.

Assumption 1. There exist C ≥ 0 and K ⊂ N such that (4) holds for all k ∈ K and the weaker inequality (5) holds for all k ∈ N - K.

Note that inequality (5) is weaker than (4). For k satisfying the stronger condition (4), the assumption of Theorem 1 can be weakened; further details can be found in [25, Remark 4.3]. Assumption 1 requires that T^{(k)} be an approximation of the differentiated retraction, so the differentiated retraction itself clearly satisfies Assumption 1. In [25, Example 4.5] and [25, Example 4.6], Sato gives examples of maps T^{(k)} satisfying Assumption 1 on the unit sphere and on Grassmann manifolds, respectively. In [34, Proposition 1], Zhu and Sato proved that the inverse of a retraction satisfies Assumption 1.

Memoryless spectral-scaling Broyden family
Let us start by reviewing the memoryless spectral-scaling Broyden family in Euclidean space. In the Euclidean setting, an iterative optimization algorithm updates the current iterate x_k to the next iterate x_{k+1} by the formula x_{k+1} = x_k + α_k d_k, where α_k > 0 is a step size and d_k is a search direction. One often chooses the step size α_k > 0 to satisfy the Wolfe conditions (see [31,32]),

f(x_k + α_k d_k) ≤ f(x_k) + c_1 α_k ∇f(x_k)⊤ d_k,    (8)
∇f(x_k + α_k d_k)⊤ d_k ≥ c_2 ∇f(x_k)⊤ d_k,    (9)

where 0 < c_1 < c_2 < 1. The search direction d_k of quasi-Newton methods is defined by

d_k = -H_k ∇f(x_k),    (10)

where H_k is an approximation of the inverse Hessian. In this paper, we focus on the Broyden family, written as

H_k = H_{k-1} - (H_{k-1} y_{k-1} y_{k-1}⊤ H_{k-1})/(y_{k-1}⊤ H_{k-1} y_{k-1}) + (s_{k-1} s_{k-1}⊤)/(s_{k-1}⊤ y_{k-1}) + φ_{k-1} (y_{k-1}⊤ H_{k-1} y_{k-1}) w_{k-1} w_{k-1}⊤,    (11)

where s_{k-1} := x_k - x_{k-1}, y_{k-1} := ∇f(x_k) - ∇f(x_{k-1}), and w_{k-1} := s_{k-1}/(s_{k-1}⊤ y_{k-1}) - H_{k-1} y_{k-1}/(y_{k-1}⊤ H_{k-1} y_{k-1}). Here, φ_{k-1} is a parameter: (11) becomes the DFP formula when φ_{k-1} = 0 and the BFGS formula when φ_{k-1} = 1 (see [18,28]). If φ_{k-1} ∈ [0, 1], then (11) is a convex combination of the DFP formula and the BFGS formula; we call this interval the convex class.
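The Wolfe conditions above can be checked numerically. The helper below is our sketch (the values of c_1 and c_2 are typical defaults, not prescribed by the paper):

```python
import numpy as np

def satisfies_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Weak Wolfe conditions for step size alpha along direction d:
    sufficient decrease (Armijo) plus the curvature condition."""
    gd = grad(x) @ d
    armijo = f(x + alpha * d) <= f(x) + c1 * alpha * gd
    curvature = grad(x + alpha * d) @ d >= c2 * gd
    return armijo and curvature
```

On f(x) = ½‖x‖² with d = -∇f(x), very small steps violate the curvature condition and overly long steps violate the Armijo condition, while moderate steps satisfy both.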
Zhang and Tewarson [33] found that a better choice may lie in the case φ_{k-1} > 1; we call this interval the preconvex class. In [3], Chen and Cheng proposed a Broyden family based on the spectral-scaling secant condition [4]:

H_k = H_{k-1} - (H_{k-1} y_{k-1} y_{k-1}⊤ H_{k-1})/(y_{k-1}⊤ H_{k-1} y_{k-1}) + (1/τ_{k-1}) (s_{k-1} s_{k-1}⊤)/(s_{k-1}⊤ y_{k-1}) + φ_{k-1} (y_{k-1}⊤ H_{k-1} y_{k-1}) w_{k-1} w_{k-1}⊤,    (12)

where τ_{k-1} > 0 is a spectral-scaling parameter. Shanno [27] proposed memoryless quasi-Newton methods in which H_{k-1} is replaced with the identity matrix in (11). Memoryless quasi-Newton methods avoid storing matrices and can thus solve large-scale unconstrained optimization problems. In addition, Nakayama, Narushima, and Yabe [15] proposed memoryless quasi-Newton methods based on the spectral-scaling Broyden family by replacing H_{k-1} with the identity matrix in (12), i.e.,

H_k = I - (y_{k-1} y_{k-1}⊤)/(y_{k-1}⊤ y_{k-1}) + (1/τ_{k-1}) (s_{k-1} s_{k-1}⊤)/(s_{k-1}⊤ y_{k-1}) + φ_{k-1} (y_{k-1}⊤ y_{k-1}) w_{k-1} w_{k-1}⊤,    (13)

where w_{k-1} := s_{k-1}/(s_{k-1}⊤ y_{k-1}) - y_{k-1}/(y_{k-1}⊤ y_{k-1}). From (10) and (13), the search direction d_k of the memoryless quasi-Newton methods based on the spectral-scaling Broyden family can be computed using only inner products, without storing matrices. In [15], global convergence was also proved for step sizes satisfying the Wolfe conditions (see [15, Theorem 3.1] and [15, Theorem 3.6]). In [9], Kou and Dai proposed a modified memoryless self-scaling BFGS method and showed that it generates a search direction satisfying the sufficient descent condition.
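The memoryless update (13), as we read it, admits a matrix-free implementation. The sketch below (ours) computes d = -H_k g with inner products only; its correctness can be checked against the spectral-scaling secant condition H_k y_{k-1} = (1/τ_{k-1}) s_{k-1}:

```python
import numpy as np

def mss_broyden_direction(g, s, y, tau=1.0, phi=1.0):
    """d = -H g for the memoryless spectral-scaling Broyden family
    (our reading of (13)):
        H = I - y y^T/(y^T y) + phi (y^T y) w w^T + (1/tau) s s^T/(s^T y),
        w = s/(s^T y) - y/(y^T y).
    phi = 1 gives a BFGS-type formula and phi = 0 a DFP-type formula."""
    sy, yy = s @ y, y @ y
    w = s / sy - y / yy
    Hg = g - ((y @ g) / yy) * y + phi * yy * (w @ g) * w + ((s @ g) / (tau * sy)) * s
    return -Hg
```

Since w is orthogonal to y, the update satisfies H y = s/τ regardless of φ, which is the defining property of the spectral-scaling secant condition.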
Moreover, Nakayama [13] used the modification of Kou and Dai to propose a search direction d_k in which a parameter ξ_{k-1} ∈ [0, 1] is introduced into the memoryless spectral-scaling Broyden direction.

Memoryless spectral-scaling Broyden family on Riemannian manifolds
We define s_{k-1} := T_{α_{k-1}η_{k-1}}(α_{k-1}η_{k-1}) ∈ T_{x_k}M and y_{k-1} := g_k - T_{α_{k-1}η_{k-1}}(g_{k-1}) ∈ T_{x_k}M, where T is a vector transport. The Riemannian quasi-Newton method with the spectral-scaling Broyden family [16, (23)] is written analogously to (12), with the linear operator H_{k-1} transported to T_{x_k}M. Here, φ_{k-1} ≥ 0 is a parameter, and τ_{k-1} > 0 is a spectral-scaling parameter. The idea behind the memoryless spectral-scaling Broyden family is very simple: in [16], a memoryless spectral-scaling Broyden family on a Riemannian manifold is obtained by replacing the transported H_{k-1} with id_{T_{x_k}M}. To guarantee global convergence, y_{k-1} is replaced by z_{k-1} ∈ T_{x_k}M satisfying the following conditions [16, (27)]: for positive constants ν, ν̄ > 0,

ν ‖s_{k-1}‖²_{x_k} ≤ ⟨s_{k-1}, z_{k-1}⟩_{x_k}    (15)

and

‖z_{k-1}‖_{x_k} ≤ ν̄ ‖s_{k-1}‖_{x_k}.    (16)

Here, we can choose z_{k-1} by using Li-Fukushima regularization [11], a Levenberg-Marquardt-type regularization, given by (17) and (18) with ν > 0. We can also use Powell's damping technique [18], given by (19) and (20) with ν ∈ (0, 1). The proof that these choices satisfy conditions (15) and (16) is given in [16, Proposition 4.1]. Thus, a memoryless spectral-scaling Broyden family on a Riemannian manifold [16, (28)] can be described by the resulting update, in which γ_{k-1} > 0 is a sizing parameter. From (3), the search direction of the memoryless spectral-scaling Broyden family on a Riemannian manifold can then be computed using only inner products on T_{x_k}M.
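The exact formulas (17)-(20) are not reproduced above, so the Euclidean sketch below is only our illustration of the two safeguards; the constants and precise forms used in [16] may differ. Both choices guarantee a curvature lower bound of type (15), ν‖s‖² ≤ ⟨s, z⟩:

```python
import numpy as np

def z_li_fukushima(s, y, nu=1e-4):
    """Li-Fukushima-type regularization: z = y + t s with t chosen so that
    <s, z> >= nu ||s||^2 holds even when <s, y> is negative."""
    t = max(0.0, -(s @ y) / (s @ s)) + nu
    return y + t * s

def z_powell_damping(s, y, nu=0.2):
    """Powell's damping with the memoryless choice B = I:
    z = theta y + (1 - theta) s, which keeps <s, z> >= nu ||s||^2."""
    sy, ss = s @ y, s @ s
    if sy >= nu * ss:
        return y
    theta = (1.0 - nu) * ss / (ss - sy)
    return theta * y + (1.0 - theta) * s
```

In the damped case, ⟨s, z⟩ equals exactly ν‖s‖², which is why Powell's technique is often preferred when the curvature ⟨s, y⟩ is strongly negative.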

Proposed algorithm
Let T^{(k-1)} : T_{x_{k-1}}M → T_{x_k}M be a map satisfying Assumption 1. Furthermore, we define s_{k-1} := T^{(k-1)}(α_{k-1}η_{k-1}) ∈ T_{x_k}M and y_{k-1} := g_k - T^{(k-1)}(g_{k-1}) ∈ T_{x_k}M. We propose search direction (21) for the modified memoryless spectral-scaling Broyden family on a Riemannian manifold, where ξ_{k-1} ∈ [0, 1] is a parameter and z_{k-1} ∈ T_{x_k}M is a tangent vector satisfying (15) and (16). Note that equation (21) not only adds ξ_{k-1} but also changes the definition of the two tangent vectors y_{k-1} and s_{k-1} used to determine z_{k-1}. The proposed algorithm is listed in Algorithm 1. Note that Algorithm 1 is a generalization of the memoryless quasi-Newton methods based on the spectral-scaling Broyden family proposed in [16]; in fact, if ξ_{k-1} = 1 and T^{(k-1)} is a vector transport, Algorithm 1 coincides with the method of [16].

Algorithm 1: Modified memoryless quasi-Newton method based on the spectral-scaling Broyden family on Riemannian manifolds.
Require: Compute a step size α_k > 0 satisfying the Wolfe conditions (8) and (9).
Assumption 3. We suppose that there exists Γ > 0 such that ‖g_k‖_{x_k} ≤ Γ for all k ∈ N.

Zoutendijk's theorem for the T^{(k)}-Wolfe conditions [25, Theorem 5.3] is described as follows.

Theorem 1. Suppose that Assumptions 1 and 2 hold. Let (x_k)_{k=0,1,...} be a sequence generated by an iterative method of the form (1), and assume that the step size α_k satisfies the T^{(k)}-Wolfe conditions (8) and (9). If the search direction η_k is a descent direction and there exists μ > 0 such that ‖g_k‖_{x_k} ≤ μ ‖η_k‖_{x_k} for all k ∈ N - K, then

∑_{k=0}^{∞} ⟨g_k, η_k⟩²_{x_k} / ‖η_k‖²_{x_k} < +∞,

where K is the subset of N in Assumption 1.
We present a proof that the search direction (21) satisfies the sufficient descent condition (2), which involves generalizing the Euclidean case in [13, Proposition 3.1] and [15, Proposition 2.1].
Proposition 1. Suppose that the search direction η_k is defined by (21) with parameters satisfying 0 ≤ ξ_{k-1} ≤ ξ < 1. Then, η_k satisfies the sufficient descent condition (2).

Proof. The proof extends the discussion in [13, Proposition 3.1] to the case of Riemannian manifolds. From the definition of the search direction (21) and the relation 2⟨u, v⟩ ≤ ‖u‖² + ‖v‖², which holds for any vectors u and v in an inner product space, we can bound ⟨g_k, η_k⟩_{x_k} from above. Considering separately the convex class and the preconvex class of the parameter φ_{k-1}, and applying the Cauchy-Schwarz inequality in each case, we obtain

⟨g_k, η_k⟩_{x_k} ≤ -κ ‖g_k‖²_{x_k}

for a positive constant κ depending on ξ, φ_{k-1}, and τ_{k-1}. Therefore, the search direction (21) satisfies the sufficient descent condition (2). □

Now let us show the global convergence of Algorithm 1.
Theorem 2. Suppose that Assumptions 1, 2, and 3 are satisfied, and that the parameters satisfy τ_k ≥ τ and φ_k ≤ φ for all k, where τ > 0 and 1 < φ < 2. Moreover, suppose that ξ_k ∈ [0, 1] satisfies (22). Let (x_k)_{k=0,1,...} be a sequence generated by Algorithm 1, and let the step size α_k satisfy the T^{(k)}-Wolfe conditions (8) and (9). Then, Algorithm 1 converges in the sense that

lim inf_{k→∞} ‖g_k‖_{x_k} = 0.

Proof. From (21), the triangle inequality, and the Cauchy-Schwarz inequality, together with condition (15) (i.e., ν ‖s_{k-1}‖²_{x_k} ≤ ⟨s_{k-1}, z_{k-1}⟩_{x_k}) and the bound ‖g_k‖_{x_k} ≤ Γ from Assumption 3, we obtain a uniform upper bound on ‖η_k‖_{x_k}. To prove convergence by contradiction, suppose that there exists ε > 0 such that ‖g_k‖_{x_k} ≥ ε for all k. It then follows from the above bounds and Proposition 1 that the Zoutendijk series in Theorem 1 diverges. This contradicts Theorem 1 and thus completes the proof. □

Numerical experiments
We compared the proposed method with existing methods, including Riemannian conjugate gradient methods and the memoryless spectral-scaling Broyden family. In the experiments, we implemented the proposed method as an optimizer in pymanopt (see [29]) and solved two Riemannian optimization problems (Problems 1 and 2). Problem 1 is the Rayleigh-quotient minimization problem on the unit sphere [2, Chapter 4.6]:

minimize f(x) := x⊤ A x subject to x ∈ S^{n-1} := {x ∈ R^n : ‖x‖ = 1},

where ‖·‖ denotes the Euclidean norm.
In the experiments, we set n = 100 and generated a matrix B ∈ R^{n×n} with randomly chosen elements by using numpy.random.randn. Then, we set the symmetric matrix A = (B + B⊤)/2.
Absil and Gallivan [1, Section 3] introduced an off-diagonal cost function. Problem 2 is an off-diagonal cost function minimization problem on an oblique manifold:

minimize f(X) := ∑_{i=1}^{N} ‖X⊤ A_i X - ddiag(X⊤ A_i X)‖²_F subject to X ∈ OB(n, p),

where OB(n, p) denotes the oblique manifold of n × p matrices with unit-norm columns, ‖·‖_F denotes the Frobenius norm, and ddiag(M) denotes the diagonal matrix obtained from M by setting all of its off-diagonal elements to zero.
In the experiments, we set N = 5, n = 10, and p = 5 and generated five matrices B_i ∈ R^{n×n} (i = 1, ..., 5) with randomly chosen elements by using numpy.random.randn. Then, we set the symmetric matrices A_i = (B_i + B_i⊤)/2. The experiments used a MacBook Air (M1, 2020) with version 12.2 of the macOS Monterey operating system. The algorithms were written in Python 3.11.3 with the NumPy 1.25.0 and Matplotlib 3.7.1 packages. Python implementations of the methods used in the numerical experiments are available at https://github.com/iiduka-researches/202307-memoryless.
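The cost function and data generation for Problem 2 can be sketched as follows (our reading of the off-diagonal cost; the experiment code at the linked repository is authoritative):

```python
import numpy as np

def ddiag(M):
    """Zero out the off-diagonal entries of M, keeping its diagonal."""
    return np.diag(np.diag(M))

def offdiag_cost(X, As):
    """Off-diagonal cost (our reading of Problem 2): the sum over i of
    ||X^T A_i X - ddiag(X^T A_i X)||_F^2."""
    return sum(np.linalg.norm(M - ddiag(M), 'fro')**2
               for M in (X.T @ A @ X for A in As))

# data generation as in the experiments: A_i = (B_i + B_i^T)/2
rng = np.random.default_rng(4)
N, n, p = 5, 10, 5
As = [(B + B.T) / 2 for B in rng.standard_normal((N, n, n))]
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)   # unit-norm columns: a point on OB(n, p)
```

Normalizing the columns of a random matrix is a simple way to produce an initial point on the oblique manifold.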
As the measure for these comparisons, we calculated the performance profile P_s : R → [0, 1] [5], defined as follows. Let P and S be the sets of problems and solvers, respectively. For each p ∈ P and s ∈ S, define t_{p,s} := (the number of iterations or the elapsed time required to solve problem p by solver s).
We defined the performance ratio r_{p,s} as r_{p,s} := t_{p,s} / min_{s′∈S} t_{p,s′}.
Next, we defined the performance profile P_s for all τ ∈ R as

P_s(τ) := |{p ∈ P : r_{p,s} ≤ τ}| / |P|,

where |A| denotes the number of elements in a set A. In the experiments, we set |P| = 100 for each of Problems 1 and 2. Figures 1-4 plot the results of our experiments. In particular, Figure 1 shows the numerical results for Problem 1 with Li-Fukushima regularization (17) and (18). It shows that Algorithm 1 with ξ = 0.1 performs much better than Algorithm 1 with ξ = 1 (i.e., the existing method), regardless of whether the BFGS formula or the preconvex class is used. In addition, Algorithm 1 with ξ = 0.8 and with ξ = 1 have about the same performance. Figure 2 shows the numerical results for Problem 1 with Powell's damping technique (19) and (20). It shows that Algorithm 1 with ξ = 0.1 is superior to Algorithm 1 with ξ = 1 (i.e., the existing method), regardless of whether the BFGS formula or the preconvex class is used. Moreover, Algorithm 1 with ξ = 0.8 and with ξ = 1 have about the same performance.
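The performance profile defined above can be computed directly. A small sketch (ours, not the experiment code at the linked repository):

```python
import numpy as np

def performance_profiles(T, taus):
    """T[p, s] is the cost (iterations or time) of solver s on problem p.
    Returns P[s, j] = |{p : r_{p,s} <= taus[j]}| / |P|, where
    r_{p,s} = T[p, s] / min_{s'} T[p, s']."""
    R = T / T.min(axis=1, keepdims=True)          # performance ratios r_{p,s}
    return np.array([[np.mean(R[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])
```

Note that P_s(1) is the fraction of problems on which solver s is (tied for) fastest, and each P_s is nondecreasing in τ, which is why profiles are read as "higher and further left is better."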
Figure 3 shows the numerical results for Problem 2 with Li-Fukushima regularization (17) and (18). It shows that if we use the BFGS formula (i.e., φ_k = 1), then Algorithm 1 with ξ = 0.8 and the HZ method outperform the others. However, Algorithm 1 with the preconvex class is not well suited to the off-diagonal cost function minimization problem on an oblique manifold. Figure 4 shows the numerical results for Problem 2 with Powell's damping technique (19) and (20). It shows that if we use the BFGS formula (i.e., φ_k = 1), then Algorithm 1 with ξ = 0.8 or ξ = 1 is superior to the others. Here too, Algorithm 1 with the preconvex class is not well suited to the problem. Therefore, we can see that Algorithm 1 with the BFGS formula (i.e., φ_k = 1) is suitable for solving an off-diagonal cost function minimization problem on an oblique manifold.

Conclusion
This paper presented a modified memoryless quasi-Newton method with the spectral-scaling Broyden family on Riemannian manifolds, i.e., Algorithm 1. Algorithm 1 is a generalization of the memoryless spectral-scaling Broyden family on Riemannian manifolds; specifically, it adds one parameter to the search direction. It also uses a general map instead of vector transport, similarly to the general framework of Riemannian conjugate gradient methods, so we can utilize maps such as vector transport, scaled vector transport, or an inverse retraction. We proved that the search direction satisfies the sufficient descent condition and that the method converges globally under the Wolfe conditions. Moreover, the numerical experiments indicated that the proposed method with the BFGS formula is suitable for solving an off-diagonal cost function minimization problem on an oblique manifold.

Assumption 2 .
Let f : M → R be a smooth function that is bounded below and has the following property: there exists L > 0 such that

Figure 1 :
Figure 1: Performance profiles of each algorithm versus the number of iterations (a) and the elapsed time (b) for Problem 1. z k is defined by Li-Fukushima regularization (17) and (18).

Figure 2 :
Figure 2: Performance profiles of each algorithm versus the number of iterations (a) and the elapsed time (b) for Problem 1. z_k is defined by Powell's damping technique (19) and (20).

Figure 3 :
Figure 3: Performance profiles of each algorithm versus the number of iterations (a) and the elapsed time (b) for Problem 2. z k is defined by Li-Fukushima regularization (17) and (18).

Figure 4 :
Figure 4: Performance profiles of each algorithm versus the number of iterations (a) and the elapsed time (b) for Problem 2. z_k is defined by Powell's damping technique (19) and (20).

Table 1 :
Results of previous studies on quasi-Newton methods in Euclidean space and on Riemannian manifolds.