Fast Manifold Learning Based on Riemannian Normal Coordinates

  • Anders Brun
  • Carl-Fredrik Westin
  • Magnus Herberthson
  • Hans Knutsson
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3540)


Abstract

We present a novel method for manifold learning, i.e., the identification of low-dimensional manifold-like structure in a set of data points lying in a possibly high-dimensional space. The main idea is derived from the concept of Riemannian normal coordinates, a coordinate system that generalizes Cartesian coordinates in Euclidean space to curved manifolds. We translate this idea to a cloud of data points in order to perform dimension reduction. Our implementation currently uses Dijkstra’s algorithm for shortest paths in graphs together with some basic concepts from differential geometry. We expect this approach to open up new possibilities for the analysis of, e.g., shape in medical imaging and signal processing of manifold-valued signals, where the coordinate system is “learned” from experimental high-dimensional data rather than defined analytically using, e.g., models based on Lie groups.
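The paper's implementation is only summarized above; as a rough illustration of the ingredients it names (a neighbourhood graph over the data cloud, Dijkstra shortest paths, and a tangent-space basis at a base point), here is a minimal Python sketch. All names and design choices here (`knn_graph`, `normal_coords`, the local-PCA tangent estimate, assigning each point the coordinates "geodesic radius × initial unit direction") are illustrative assumptions, not the authors' actual algorithm.

```python
import heapq
import numpy as np

def knn_graph(X, k):
    """Pairwise distances and the k nearest neighbours of each point (self excluded)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    nbrs = np.argsort(D, axis=1)[:, 1:k + 1]
    return D, nbrs

def dijkstra(D, nbrs, src):
    """Graph-geodesic distances from src, plus the first step of each shortest path."""
    n = len(nbrs)
    dist = np.full(n, np.inf)
    first = np.full(n, -1)          # first node after src on the shortest path
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                # stale queue entry
        for v in nbrs[u]:
            nd = d + D[u, v]
            if nd < dist[v]:
                dist[v] = nd
                first[v] = v if u == src else first[u]
                heapq.heappush(pq, (nd, v))
    return dist, first

def normal_coords(X, base=0, k=8, dim=2):
    """Assign each point Riemannian-normal-style coordinates around X[base]:
    radius = graph geodesic distance, direction = initial direction of the
    shortest path, projected onto a PCA-estimated tangent plane at the base."""
    D, nbrs = knn_graph(X, k)
    dist, first = dijkstra(D, nbrs, base)
    local = X[nbrs[base]] - X[base]          # neighbourhood of the base point
    _, _, Vt = np.linalg.svd(local, full_matrices=False)
    B = Vt[:dim]                             # rows: orthonormal tangent basis
    Y = np.zeros((len(X), dim))
    for i in range(len(X)):
        if first[i] < 0:
            continue                         # base point or unreachable
        v = B @ (X[first[i]] - X[base])      # initial step, in tangent coords
        n = np.linalg.norm(v)
        if n > 0:
            Y[i] = dist[i] * v / n           # geodesic polar coordinates
    return Y
```

By construction each embedded point sits at its graph-geodesic distance from the base point, in the direction its shortest path initially leaves the base — mimicking how normal coordinates are defined through the inverse exponential map on a manifold.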


Keywords: Image Patch · Geodesic Distance · Klein Bottle · Nonlinear Dimensionality Reduction · Swiss Roll



Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Anders Brun (1, 3)
  • Carl-Fredrik Westin (3)
  • Magnus Herberthson (2)
  • Hans Knutsson (1)
  1. Department of Biomedical Engineering, Linköpings universitet, Linköping, Sweden
  2. Department of Mathematics, Linköpings universitet, Linköping, Sweden
  3. Laboratory of Mathematics in Imaging, Harvard Medical School, Boston, USA
