Non-redundant Spectral Dimensionality Reduction

  • Yochai Blau
  • Tomer Michaeli
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10534)

Abstract

Spectral dimensionality reduction algorithms are widely used in numerous domains, including recognition, segmentation, tracking and visualization. However, despite their popularity, these algorithms suffer from a major limitation known as the “repeated eigen-directions” phenomenon. That is, many of the embedding coordinates they produce typically capture the same direction along the data manifold. This leads to redundant and inefficient representations that do not reveal the true intrinsic dimensionality of the data. In this paper, we propose a general method for avoiding redundancy in spectral algorithms. Our approach relies on replacing the orthogonality constraints underlying those methods by unpredictability constraints. Specifically, we require that each embedding coordinate be unpredictable (in the statistical sense) from all previous ones. We prove that these constraints necessarily prevent redundancy, and provide a simple technique to incorporate them into existing methods. As we illustrate on challenging high-dimensional scenarios, our approach produces significantly more informative and compact representations, which improve visualization and classification tasks.

Supplementary material

Supplementary material 1: 460683_1_En_16_MOESM1_ESM.pdf (PDF, 436 KB)


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Technion–Israel Institute of Technology, Haifa, Israel