
Geodesic Regression and the Theory of Least Squares on Riemannian Manifolds

Published in the International Journal of Computer Vision.

Abstract

This paper develops the theory of geodesic regression and least-squares estimation on Riemannian manifolds. Geodesic regression is a method for finding the relationship between a real-valued independent variable and a manifold-valued dependent random variable, where this relationship is modeled as a geodesic curve on the manifold. Least-squares estimation is formulated intrinsically as a minimization of the sum-of-squared geodesic distances of the data to the estimated model. Geodesic regression is a direct generalization of linear regression to the manifold setting, and it provides a simple parameterization of the estimated relationship as an initial point and velocity, analogous to the intercept and slope. A nonparametric permutation test for determining the significance of the trend is also given. For the case of symmetric spaces, two main theoretical results are established. First, conditions for existence and uniqueness of the least-squares problem are provided. Second, a maximum likelihood criterion is developed for a suitable definition of Gaussian errors on the manifold. While the method can be generally applied to data on any manifold, specific examples are given for a set of synthetically generated rotation data and an application to analyzing shape changes in the corpus callosum due to age.
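The least-squares formulation in the abstract can be made concrete with a small numerical sketch. The following is a minimal illustration, not the paper's implementation: it assumes the unit sphere \(S^2\) with its closed-form exponential map and arc-length distance, and evaluates the sum-of-squared geodesic distances of data \((x_i, y_i)\) to a candidate geodesic parameterized by an initial point \(p\) and velocity \(v\). All function names are ours.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(a, c):
    return [c * x for x in a]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def sphere_exp(p, v):
    """Exponential map on the unit sphere: follow the geodesic from p with initial velocity v."""
    norm_v = math.sqrt(dot(v, v))
    if norm_v < 1e-12:
        return list(p)
    return add(scale(p, math.cos(norm_v)), scale(v, math.sin(norm_v) / norm_v))

def sphere_dist(p, q):
    """Geodesic (arc-length) distance between unit vectors p and q."""
    return math.acos(max(-1.0, min(1.0, dot(p, q))))

def ls_energy(p, v, xs, ys):
    """Sum-of-squared geodesic distances of the data to the geodesic t -> Exp_p(t v)."""
    return 0.5 * sum(sphere_dist(sphere_exp(p, scale(v, x)), y) ** 2
                     for x, y in zip(xs, ys))
```

Data lying exactly on the model geodesic yield zero energy; geodesic regression estimates \((p, v)\) by minimizing this energy, in direct analogy with the intercept and slope of linear regression.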


[Figures 1–6 appear in the published article.]



Acknowledgments

This work was supported by NSF CAREER Grant 1054057.

Author information


Corresponding author

Correspondence to P. Thomas Fletcher.

Appendix A: Proof of Theorem 1


When \(\kappa = 0,\) it is clear that the vector field \(J\) changes linearly in \(t,\) giving the desired result \(DJ/dt = 0.\) Therefore, it suffices to consider only the cases where \(\kappa \ne 0.\) We first consider the case \(\kappa > 0.\) Then we can write

$$\begin{aligned} J(s, t)&= \cos \left(s \sqrt{\kappa } L(t)\right) X(t) E(s, t)\\&+ \frac{\sin \left(s \sqrt{\kappa } L(t)\right)}{\sqrt{\kappa } L(t)} Y(t) E(s, t)\\&+ Z(t) T(s, t) + s W(t) T(s, t). \end{aligned}$$
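As a sanity check, the normal part of this field, written as \(f(s) = \cos (s \sqrt{\kappa } L) X + \sin (s \sqrt{\kappa } L) Y / (\sqrt{\kappa } L)\) (the form used below in computing \(d/dt \langle J, E \rangle\)), satisfies the scalar Jacobi equation \(f'' + \kappa L^2 f = 0\) for constant curvature \(\kappa\). A finite-difference check, with arbitrary illustrative values for \(\kappa, L, X, Y\):

```python
import math

# Arbitrary illustrative values (not from the paper)
kappa, L, X, Y = 2.0, 1.3, 0.7, -0.4
k = math.sqrt(kappa) * L

def f(s):
    """Normal (E-) component of the Jacobi field J, as a function of s."""
    return math.cos(k * s) * X + math.sin(k * s) / (math.sqrt(kappa) * L) * Y

def second_derivative(g, s, h=1e-4):
    """Central finite-difference approximation of g''(s)."""
    return (g(s + h) - 2.0 * g(s) + g(s - h)) / (h * h)

# f solves the constant-curvature Jacobi equation f'' + kappa * L^2 * f = 0
for s in (0.0, 0.3, 1.0):
    assert abs(second_derivative(f, s) + kappa * L * L * f(s)) < 1e-5
```

The tangential part \(Z + s W\) is affine in \(s\), matching the flat (\(\kappa = 0\)) form of the Jacobi equation along \(T\).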

Our goal is to compute the normal and tangential components of \(DJ/dt.\) We will use the identities

$$\begin{aligned} \nonumber \left\langle \frac{DJ}{dt}, T\right\rangle&= \frac{1}{L} \left\langle \frac{DJ}{dt}, V\right\rangle \\&= \frac{1}{L}\left(\frac{d}{dt} \left\langle J, V \right\rangle - \left\langle J, \frac{DV}{dt} \right\rangle \right). \end{aligned}$$
(19)
$$\begin{aligned} \left\langle \frac{DJ}{dt}, E\right\rangle = \frac{d}{dt} \left\langle J, E \right\rangle - \left\langle J, \frac{DE}{dt} \right\rangle . \end{aligned}$$
(20)

Tangential Component: We start by noting that

$$\begin{aligned} \frac{DV}{dt} = \frac{D}{dt} \frac{d\alpha }{ds} = \frac{D}{ds} \frac{d\alpha }{dt} = \frac{DJ}{ds}, \end{aligned}$$

and we can compute, using \(k = \sqrt{\kappa } L,\)

$$\begin{aligned} \frac{DV}{dt} = \frac{DJ}{ds} = - k \sin (s k) X E + \cos (sk) YE + WT. \end{aligned}$$
(21)
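Equation (21) can be verified numerically: a central difference of the \(E\)-component of \(J\) in \(s\) should match the stated \(E\)-component of \(DJ/ds\). The values below are arbitrary illustrative choices:

```python
import math

# Arbitrary illustrative values (not from the paper)
kappa, L, X, Y = 1.7, 0.9, 0.5, 1.1
k = math.sqrt(kappa) * L   # the abbreviation k = sqrt(kappa) * L from the text

def J_E(s):
    """E-component of the Jacobi field J (note sin(sk)/(sqrt(kappa) L) = sin(sk)/k)."""
    return math.cos(s * k) * X + math.sin(s * k) / k * Y

def dJ_E(s):
    """E-component of DJ/ds as stated in Eq. (21)."""
    return -k * math.sin(s * k) * X + math.cos(s * k) * Y

# Central differences of J_E reproduce Eq. (21)
for s in (0.0, 0.4, 1.2):
    fd = (J_E(s + 1e-6) - J_E(s - 1e-6)) / 2e-6
    assert abs(fd - dJ_E(s)) < 1e-6
```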

This gives the second term in (19),

$$\begin{aligned} \left\langle J, \frac{DV}{dt} \right\rangle&= \frac{1}{2}\sin (2 s k) \left(\frac{1}{k} Y^2 - k X^2\right)\\&+\cos (2 s k) X Y + W Z + s W^2. \end{aligned}$$
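This closed form follows from the double-angle identities \(2 \sin \theta \cos \theta = \sin 2\theta\) and \(\cos ^2 \theta - \sin ^2 \theta = \cos 2\theta\). A numerical spot check, expanding the inner product of \(J\) and \(DV/dt\) componentwise (arbitrary illustrative values):

```python
import math

# Arbitrary illustrative values (not from the paper)
kappa, L, s = 0.8, 1.5, 0.6
X, Y, Z, W = 0.3, -0.7, 1.1, 0.4
k = math.sqrt(kappa) * L

# Raw inner product: E-components of J and DV/dt, plus the tangential part (Z + sW) W
J_E = math.cos(s * k) * X + math.sin(s * k) / k * Y
DV_E = -k * math.sin(s * k) * X + math.cos(s * k) * Y
raw = J_E * DV_E + (Z + s * W) * W

# Closed form from the text, via the double-angle identities
closed = (0.5 * math.sin(2 * s * k) * (Y * Y / k - k * X * X)
          + math.cos(2 * s * k) * X * Y + W * Z + s * W * W)

assert abs(raw - closed) < 1e-12
```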

The first term in (19) is given by

$$\begin{aligned} \frac{d}{dt} \left\langle J, V \right\rangle&= \frac{d}{dt} \left(LZ + s L W \right)\\&= \frac{dL}{dt} (Z + s W) + L \frac{dZ}{dt} + s L \frac{dW}{dt}. \end{aligned}$$

We now compute each of the derivatives in this equation. Starting with \(dL/dt,\) and using the previous result in (21), we get

$$\begin{aligned} \frac{dL}{dt} = \frac{d}{dt} \Vert V \Vert = \frac{1}{L} \left\langle \frac{DV}{dt}, V \right\rangle = W. \end{aligned}$$

Using the fact that \(T\) is a unit vector field, we get

$$\begin{aligned} \frac{DT}{dt}&= \left\langle \frac{DT}{dt}, E \right\rangle E = \frac{1}{L} \left\langle \frac{DV}{dt}, E \right\rangle E\\&= \left(-\sqrt{\kappa } \sin \left(s \sqrt{\kappa } L\right) X + \frac{\cos \left(s \sqrt{\kappa } L\right)}{L} Y \right) E. \end{aligned}$$

Evaluating this at \(s = 0\) gives

$$\begin{aligned} \frac{DT}{dt} (0, t) = \frac{Y(t)}{L(t)} E(0, t). \end{aligned}$$

Denoting by \(\tau _t\) parallel translation along \(p(t),\) we can write

$$\begin{aligned} Z(t)&= \left\langle Z(0)\, \tau _t T(0,0) + X(0)\, \tau _t E(0, 0),\, T(0, t) \right\rangle ,\\ W(t)&= \left\langle W(0)\, \tau _t T(0,0) + Y(0)\, \tau _t E(0, 0),\, T(0, t) \right\rangle . \end{aligned}$$

Using this, and the fact that \(Z, W\) are constant in \(s,\) allows us to compute

$$\begin{aligned} \frac{dZ}{dt}&= \frac{X Y}{L},\\ \frac{dW}{dt}&= \frac{Y^2}{L}. \end{aligned}$$

We put these together to get

$$\begin{aligned} \frac{d}{dt} \left\langle J, V \right\rangle = W Z + X Y + s W^2 + s Y^2. \end{aligned}$$

Finally, the tangential component of \(DJ/dt\) is given by

$$\begin{aligned} \left\langle \frac{DJ}{dt}, T \right\rangle&= \frac{\sqrt{\kappa }}{2} \sin \left(2 s \sqrt{\kappa } L\right) \left(X^2 - \frac{Y^2}{\kappa L^2} \right) \nonumber \\&- \frac{\cos \left(2 s \sqrt{\kappa } L\right)}{L} X Y \nonumber \\&+ \frac{s Y^2}{L} + \frac{X Y}{L}. \end{aligned}$$
(22)

Normal Component: Similar to the computation above for \(Z, W,\) but now for the normal components \(X,Y,\) we get

$$\begin{aligned} X(t)&= \left\langle X(0)\, \tau _t E(0, 0) + Z(0)\, \tau _t T(0, 0), E(0, t) \right\rangle ,\\ Y(t)&= \left\langle Y(0)\, \tau _t E(0, 0) + W(0) \tau _t T(0, 0), E(0, t) \right\rangle . \end{aligned}$$

Using the fact that \(E\) is a unit vector field, we get

$$\begin{aligned} \frac{DE}{dt}&= \left\langle \frac{DE}{dt}, T \right\rangle T = -\frac{1}{L} \left\langle \frac{DV}{dt}, E \right\rangle T\\&= \left(\sqrt{\kappa } \sin (s k) X - \frac{\cos (s k)}{L} Y\right) T. \end{aligned}$$

Evaluating this at \(s = 0\) gives

$$\begin{aligned} \frac{DE}{dt} (0, t) = -\frac{Y(t)}{L(t)} T(0, t). \end{aligned}$$

Again, using the fact that \(X, Y\) are constant in \(s,\) this gives us

$$\begin{aligned} \frac{dX}{dt}&= -\frac{Y Z}{L},\\ \frac{dY}{dt}&= -\frac{Y W}{L}. \end{aligned}$$

The first term in (20) is calculated as

$$\begin{aligned} \frac{d}{dt} \left\langle J, E \right\rangle&= \,\, \frac{d}{dt} \left(\cos \left( s \sqrt{\kappa } L \right) X + \frac{\sin \left( s \sqrt{\kappa } L \right)}{\sqrt{\kappa } L} Y \right)\\&= \,\, -s \sqrt{\kappa } \sin \left( s \sqrt{\kappa } L \right) X W - \frac{\cos \left( s \sqrt{\kappa } L \right)}{L} Y Z\\&+\,\, \frac{s \sqrt{\kappa } L \cos \left( s \sqrt{\kappa } L \right) - 2 \sin \left( s \sqrt{\kappa } L \right)}{\sqrt{\kappa } L^2} Y W. \end{aligned}$$

Again, using the fact that \(E\) is a unit vector field, we have

$$\begin{aligned} \frac{DE}{dt}&= \frac{1}{L} \left\langle \frac{DE}{dt}, V \right\rangle T = -\frac{1}{L} \left\langle E, \frac{DV}{dt} \right\rangle T\\&= \left(\sqrt{\kappa } \sin \left(s \sqrt{\kappa } L\right) X - \frac{\cos \left(s \sqrt{\kappa } L\right)}{L} Y\right) T. \end{aligned}$$

The second term in (20) is now given by

$$\begin{aligned} \left\langle J, \frac{DE}{dt} \right\rangle&= \left(\sqrt{\kappa } \sin \left(s \sqrt{\kappa } L\right) X - \frac{\cos \left(s \sqrt{\kappa } L\right)}{L} Y\right)\\&\times \left( Z + s W \right). \end{aligned}$$

Putting this together, we get the normal component of \(DJ/dt\) to be

$$\begin{aligned} \left\langle \frac{DJ}{dt}, E \right\rangle&= - 2 s \sqrt{\kappa } \sin \left(s \sqrt{\kappa } L\right) X W \nonumber \\&- \sqrt{\kappa } \sin \left(s \sqrt{\kappa } L\right) X Z \nonumber \\&+ \frac{2s}{L} \cos \left(s \sqrt{\kappa } L\right) Y W \nonumber \\&- \frac{2}{\sqrt{\kappa } L^2} \sin \left(s \sqrt{\kappa } L\right) Y W. \end{aligned}$$
(23)
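Equation (23) is exactly the difference of the two preceding expressions, per identity (20). A numerical spot check with arbitrary illustrative values:

```python
import math

# Arbitrary illustrative values (not from the paper)
kappa, L, s = 1.2, 0.8, 0.7
X, Y, Z, W = 0.6, -0.3, 0.9, 0.5
rk = math.sqrt(kappa)
a = s * rk * L

# d/dt <J, E>, as derived in the text
term1 = (-s * rk * math.sin(a) * X * W
         - math.cos(a) / L * Y * Z
         + (s * rk * L * math.cos(a) - 2 * math.sin(a)) / (rk * L * L) * Y * W)

# <J, DE/dt>, as derived in the text
term2 = (rk * math.sin(a) * X - math.cos(a) / L * Y) * (Z + s * W)

# Eq. (23): the normal component of DJ/dt
eq23 = (-2 * s * rk * math.sin(a) * X * W
        - rk * math.sin(a) * X * Z
        + 2 * s / L * math.cos(a) * Y * W
        - 2 / (rk * L * L) * math.sin(a) * Y * W)

assert abs(term1 - term2 - eq23) < 1e-12
```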

Negative Sectional Curvature: Now consider the case when the sectional curvature is negative, i.e., \(\kappa < 0.\) The Jacobi field is given by

$$\begin{aligned} J(s, t)&= \cosh \left(s \sqrt{-\kappa } L\right) X(t) E(s, t)\\&+ \frac{\sinh \left(s \sqrt{-\kappa } L\right)}{\sqrt{-\kappa } L} Y(t) E(s, t)\\&+ Z(t) T(s, t) + s W(t) T(s, t). \end{aligned}$$

The derivation of \(DJ/dt\) in this case proceeds almost identically to the positive curvature case, taking care to handle the sign difference when differentiating \(\cosh .\) The result is

$$\begin{aligned} \left\langle \frac{DJ}{dt}, T \right\rangle&= -\frac{\sqrt{-\kappa }}{2} \sinh \left(2 s \sqrt{-\kappa } L \right) \left(X^2 - \frac{Y^2}{\kappa L^2} \right) \nonumber \\&\, - \frac{\cosh \left( 2 s \sqrt{-\kappa } L \right)}{L} X Y \nonumber \\&\,+ s \frac{Y^2}{L} + \frac{X Y}{L} \end{aligned}$$
(24)
$$\begin{aligned} \left\langle \frac{DJ}{dt}, E \right\rangle&= 2 s \sqrt{-\kappa } \sinh \left(s \sqrt{-\kappa } L\right) X W \nonumber \\&+ \sqrt{-\kappa } \sinh \left(s \sqrt{-\kappa } L\right) X Z \nonumber \\&+ \frac{2s}{L} \cosh \left(s \sqrt{-\kappa } L\right) Y W \nonumber \\&- \frac{2}{\sqrt{-\kappa } L^2} \sinh \left(s \sqrt{-\kappa } L\right) Y W. \end{aligned}$$
(25)

The final formulas for the second derivative of the exponential map are given by evaluation at \((s, t) = (1, 0)\) in Eqs.  (22)–(25).\(\square \)

Cite this article

Fletcher, P.T. Geodesic Regression and the Theory of Least Squares on Riemannian Manifolds. Int J Comput Vis 105, 171–185 (2013). https://doi.org/10.1007/s11263-012-0591-y
