Riemannian Trust Regions with Finite-Difference Hessian Approximations are Globally Convergent
The Riemannian trust-region algorithm (RTR) is designed to optimize differentiable cost functions on Riemannian manifolds. It proceeds by iteratively optimizing local models of the cost function. When these models are exact up to second order, RTR boasts a quadratic convergence rate to critical points. In practice, building such models requires computing the Riemannian Hessian, which may be challenging. A simple idea to alleviate this difficulty is to approximate the Hessian using finite differences of the gradient. Unfortunately, this is a nonlinear approximation, which breaks the known convergence results for RTR.
We propose RTR-FD: a modification of RTR which retains global convergence when the Hessian is approximated using finite differences. Importantly, RTR-FD reduces gracefully to RTR if a linear approximation is used. This algorithm is available in the Manopt toolbox.
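The finite-difference idea is easiest to state in the Euclidean special case, where the retraction is $R_x(v) = x + v$ and vector transport is the identity: the Hessian-vector product is approximated by a forward difference of gradients. The sketch below is illustrative only (function and parameter names are not from the paper or from Manopt) and assumes a smooth cost on $\mathbb{R}^n$ with a gradient callable:

```python
import numpy as np

def fd_hessian_vector(grad, x, u, t=1e-6):
    """Approximate Hess f(x)[u] by a forward difference of gradients.

    In the Euclidean setting this reduces to
        (grad f(x + s*u) - grad f(x)) / s,
    with the step s scaled so that the perturbation has norm t
    regardless of ||u||. On a general manifold, x + s*u would be
    replaced by a retraction and the gradients would need to be
    compared in the same tangent space via vector transport.
    """
    norm_u = np.linalg.norm(u)
    if norm_u == 0.0:
        return np.zeros_like(u)
    step = t / norm_u  # perturbation x + step*u has norm t in direction u
    return norm_u * (grad(x + step * u) - grad(x)) / t

# Example: f(x) = 0.5 * x^T A x, so grad f(x) = A x and Hess f(x)[u] = A u.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
grad = lambda x: A @ x
x = np.array([1.0, -1.0])
u = np.array([0.5, 2.0])
print(fd_hessian_vector(grad, x, u))  # close to A @ u = [3.0, 6.5]
```

Because the gradient of a quadratic is linear, the approximation is exact here up to floating-point error; for general costs the forward difference is a first-order approximation of the Hessian-vector product, and it is nonlinear in `u`, which is precisely the property that motivates the modified analysis in RTR-FD.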
Keywords: RTR-FD · Optimization on manifolds · Convergence · Manopt