Abstract

Hessian locally linear embedding (HLLE) achieves a linear embedding by minimizing the Hessian functional on the manifold where the data set resides. The conceptual framework of HLLE may be viewed as a modification of the Laplacian eigenmaps framework. Let H be the observed high-dimensional data, which reside on a low-dimensional manifold M, and let h be the coordinate mapping on M, so that Y = h(H) is a dimensionality reduction (DR) of H. In the Laplacian eigenmaps method, h is found in the (numerical) null space of the Laplace–Beltrami operator on M, while in Hessian locally linear embedding, it is found in the null space of the Hessian functional. Since the HLLE embedding is locally linear, it works well for data lying on a manifold that need not be convex. Compared with nonlinear DR methods such as Isomap, which requires the data set to lie on a convex manifold, HLLE can therefore be applied to a wider range of data. The chapter is organized as follows. In Section 13.1, we describe the Hessian locally linear embedding method and its mathematical background. In Section 13.2, the HLLE DR algorithm is introduced. Experiments with the algorithm are presented in Section 13.3.
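The sketch below, which is not code from the chapter, illustrates the framework just described using scikit-learn's implementation of the Donoho–Grimes Hessian eigenmaps algorithm [1] on a synthetic Swiss-roll data set. The sample size, neighborhood size k = 12, and target dimension d = 2 are illustrative choices, not values prescribed by the text.

```python
# A minimal sketch of HLLE on a Swiss roll, via scikit-learn's
# implementation of Hessian eigenmaps (Donoho-Grimes [1]).
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# H: high-dimensional observations sampled from a 2-D manifold in R^3.
H, color = make_swiss_roll(n_samples=1500, random_state=0)

# method='hessian' requires n_neighbors > n_components*(n_components+3)/2
# (here: > 5), since that many points are needed to fit the local
# quadratic (Hessian) model on each tangent-hyperplane neighborhood.
hlle = LocallyLinearEmbedding(
    n_neighbors=12,      # neighborhood size k (illustrative choice)
    n_components=2,      # target dimension d of the embedding Y
    method="hessian",    # embed via the null space of the Hessian estimator
)
Y = hlle.fit_transform(H)  # Y = h(H): the low-dimensional representation
```

In recent scikit-learn versions, passing hole=True to make_swiss_roll removes a rectangle from the parameter domain, producing the non-convex setting in which HLLE retains its accuracy while convexity-dependent methods such as Isomap distort the embedding.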

Keywords

Null space · Neighborhood size · Linear embedding · Tangent hyperplane · Noise standard deviation


References

[1] Donoho, D.L., Grimes, C.: Hessian eigenmaps: New locally linear embedding techniques for high-dimensional data. Proc. Natl. Acad. Sci. USA 100, 5591–5596 (2003).
[2] Belkin, M.: Problems of Learning on Manifolds. Ph.D. thesis, The University of Chicago (2003).
[3] Belkin, M., Niyogi, P.: Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation 15(6), 1373–1396 (2003).
[4] de Silva, V., Tenenbaum, J.B.: Global versus local methods in nonlinear dimensionality reduction. In: Becker, S., Thrun, S., Obermayer, K. (eds.) Neural Information Processing Systems (NIPS 2002), pp. 705–712. MIT Press (2002).
[5] Tenenbaum, J.B., de Silva, V., Langford, J.C.: A global geometric framework for nonlinear dimensionality reduction. Science 290(5500), 2319–2323 (2000).
[6] Roweis, S.T., Saul, L.K.: Nonlinear dimensionality reduction by locally linear embedding. Science 290(5500), 2323–2326 (2000).
[7] Saul, L.K., Roweis, S.T.: Think globally, fit locally: Unsupervised learning of low dimensional manifolds. Journal of Machine Learning Research 4, 119–155 (2003).

Copyright information

© Higher Education Press, Beijing and Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Jianzhong Wang
    Department of Mathematics and Statistics, Sam Houston State University, Huntsville, USA
