Chapter

Algorithmic Advances in Riemannian Geometry and Applications

Part of the book series Advances in Computer Vision and Pattern Recognition, pp. 73–91


Geometric Optimization in Machine Learning

  • Suvrit Sra, Laboratory for Information & Decision Systems (LIDS), Massachusetts Institute of Technology
  • Reshad Hosseini, School of ECE, College of Engineering, University of Tehran



Abstract

Machine learning models often rely on sparsity, low-rank, orthogonality, correlation, or graphical structure. The structure of interest in this chapter is geometric, specifically the manifold of positive definite (PD) matrices. Though these matrices recur throughout the applied sciences, our focus is on more recent developments in machine learning and optimization. In particular, we study (i) models that may be nonconvex in the Euclidean sense but are geodesically convex on the PD manifold; and (ii) models that are neither Euclidean nor geodesically convex but are nevertheless amenable to global optimization. We cover basic theory for (i) and (ii); subsequently, we present a scalable Riemannian limited-memory BFGS algorithm (which also applies to other manifolds). We highlight some applications from statistics and machine learning that benefit from the geometric structures studied here.
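
As a brief illustration of claim (i), here is a minimal sketch, assuming the standard affine-invariant metric on PD matrices (a standard definition, not quoted from the chapter body, which may state it in an equivalent form): the geodesic joining two PD matrices, and the induced notion of geodesic convexity.

% Standard facts under the affine-invariant metric on PD matrices;
% the chapter may use a different but equivalent formulation.
\[
  \gamma(t) \;=\; X^{1/2}\bigl(X^{-1/2}\, Y\, X^{-1/2}\bigr)^{t} X^{1/2},
  \qquad t \in [0,1],\quad X, Y \succ 0,
\]
\[
  f \text{ is geodesically convex} \;\iff\;
  f\bigl(\gamma(t)\bigr) \;\le\; (1-t)\, f(X) + t\, f(Y)
  \quad \text{for all such } X, Y, t.
\]

As a quick sanity check, $\det\gamma(t) = (\det X)^{1-t}(\det Y)^{t}$, so $\log\det\gamma(t) = (1-t)\log\det X + t\log\det Y$; hence $X \mapsto \log\det X$ is geodesically linear even though it is concave in the Euclidean sense.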