Abstract
Manifold learning is one of the most exciting developments in machine learning in recent years. The central idea underlying these methods is that although natural data are typically represented in very high-dimensional spaces, the process generating the data often has relatively few degrees of freedom. A natural mathematical characterization of this intuition is to model the data as lying on or near a low-dimensional manifold.
Recently, manifold learning has also been applied to semi-supervised learning, that is, classification using both labeled and unlabeled data. For example, once the manifold is estimated, the Laplace–Beltrami operator provides a basis for maps intrinsically defined on the manifold, and the appropriate classifier (a map in that basis) is then estimated from the labeled examples.
In this chapter, we discuss the manifold perspective on visual pattern representation, dimensionality reduction, and classification, and survey manifold learning concepts, technical mechanisms, and algorithms.
© 2009 Springer-Verlag London Limited
Cite this chapter
Zheng, N., Xue, J. (2009). Manifold Learning. In: Statistical Learning and Pattern Analysis for Image and Video Processing. Advances in Pattern Recognition. Springer, London. https://doi.org/10.1007/978-1-84882-312-9_4
Print ISBN: 978-1-84882-311-2
Online ISBN: 978-1-84882-312-9