Optical Remote Sensing, pp. 207–234

Part of the Augmented Vision and Reality book series (volume 3)

Exploring Nonlinear Manifold Learning for Classification of Hyperspectral Data

Chapter

Abstract

Increased availability of hyperspectral data and greater access to advanced computing have motivated development of more sophisticated methods for exploiting the nonlinear characteristics of these data. Advances in manifold learning developed within the machine learning community are now being adapted for analysis of hyperspectral data. This chapter investigates the performance of popular global (Isomap, KPCA) and local (LLE, LTSA, LE) nonlinear manifold learning methods for dimensionality reduction in the context of classification. Experiments were conducted on hyperspectral data acquired by multiple sensors at various spatial resolutions over different types of land cover. Nonlinear dimensionality reduction methods often outperformed linear feature extraction methods, and the resulting classification accuracies rivaled or exceeded those obtained using the full-dimensional data.
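The workflow described above can be sketched in a few lines: reduce the dimensionality of the data with a linear baseline (PCA) and several nonlinear manifold methods, then classify in the embedded space. The sketch below uses scikit-learn on a synthetic Swiss-roll data set standing in for hyperspectral pixels; the data, neighborhood sizes, kernel parameter, and k-NN classifier are illustrative assumptions, not the chapter's actual experimental configuration.

```python
# Minimal sketch: linear (PCA) vs. nonlinear (Isomap, LLE, KPCA)
# dimensionality reduction followed by a simple k-NN classifier.
# All parameter choices here are assumptions for illustration only.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA, KernelPCA
from sklearn.manifold import Isomap, LocallyLinearEmbedding
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic nonlinear data standing in for hyperspectral pixels;
# two classes defined along the manifold coordinate t.
X, t = make_swiss_roll(n_samples=1000, noise=0.1, random_state=0)
y = (t > np.median(t)).astype(int)

reducers = {
    "PCA": PCA(n_components=2),
    "Isomap": Isomap(n_neighbors=10, n_components=2),
    "LLE": LocallyLinearEmbedding(n_neighbors=10, n_components=2),
    "KPCA": KernelPCA(n_components=2, kernel="rbf", gamma=0.01),
}

for name, reducer in reducers.items():
    # Embed all samples, then train/test a classifier in the embedded space.
    Z = reducer.fit_transform(X)
    Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, random_state=0)
    acc = KNeighborsClassifier(5).fit(Z_tr, y_tr).score(Z_te, y_te)
    print(f"{name}: accuracy = {acc:.3f}")
```

On strongly curved data such as this, the nonlinear embeddings typically separate the classes better than PCA, which mirrors the chapter's finding that nonlinear methods often outperform linear extraction; LTSA and Laplacian eigenmaps can be swapped in via `LocallyLinearEmbedding(method="ltsa")` and `SpectralEmbedding`, respectively.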

Keywords

Manifold learning · Dimensionality reduction · Classification · Hyperspectral · Isometric feature mapping · Kernel principal component analysis · Locally linear embedding · Local tangent space alignment · Laplacian eigenmaps

References

  1. Tenenbaum, J.B., de Silva, V., Langford, J.C.: A global geometric framework for nonlinear dimensionality reduction. Science 290(5500), 2319–2323 (2000)
  2. Roweis, S.T., Saul, L.K.: Nonlinear dimensionality reduction by locally linear embedding. Science 290(5500), 2323–2326 (2000)
  3. Donoho, D.L., Grimes, C.: Hessian eigenmaps: locally linear embedding techniques for high-dimensional data. Proc. Natl. Acad. Sci. USA 100(10), 5591–5596 (2003)
  4. Saul, L.K., Roweis, S.T.: Think globally, fit locally: unsupervised learning of low dimensional manifolds. J. Mach. Learn. Res. 4, 119–155 (2003)
  5. Agrafiotis, D.K.: Stochastic proximity embedding. J. Comput. Chem. 24(10), 1215–1221 (2003)
  6. Bachmann, C.M., Ainsworth, T.L., Fusina, R.A.: Exploiting manifold geometry in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 43(3), 441–454 (2005)
  7. Bachmann, C.M., Ainsworth, T.L., Fusina, R.A.: Improved manifold coordinate representations of large-scale hyperspectral scenes. IEEE Trans. Geosci. Remote Sens. 44(10), 2786–2803 (2006)
  8. Bachmann, C.M., Ainsworth, T.L., Fusina, R.A., Montes, M.J., Bowles, J.H., Korwan, D.R., et al.: Bathymetric retrieval from hyperspectral imagery using manifold coordinate representations. IEEE Trans. Geosci. Remote Sens. 47(3), 884–897 (2009)
  9. Mohan, A., Sapiro, G., Bosch, E.: Spatially coherent nonlinear dimensionality reduction and segmentation of hyperspectral images. IEEE Geosci. Remote Sens. Lett. 4(2), 206–210 (2007)
  10. Han, T., Goodenough, D.G.: Investigation of nonlinearity in hyperspectral imagery using surrogate data methods. IEEE Trans. Geosci. Remote Sens. 46(10), 2840–2847 (2008)
  11. Chen, Y., Crawford, M.M., Ghosh, J.: Improved nonlinear manifold learning for land cover classification via intelligent landmark selection. In: IEEE Int. Geosci. Remote Sens. Symp., Denver, Colorado, USA, pp. 545–548, July 2006
  12. Kim, W., Chen, Y., Crawford, M.M., Tilton, J.C., Ghosh, J.: Multiresolution manifold learning for classification of hyperspectral data. In: IEEE Int. Geosci. Remote Sens. Symp., Barcelona, Spain, pp. 3785–3788, July 2007
  13. Kim, W., Crawford, M.M., Ghosh, J.: Spatially adapted manifold learning for classification of hyperspectral imagery with insufficient labeled data. In: IEEE Int. Geosci. Remote Sens. Symp., Boston, Massachusetts, USA, vol. 1, pp. I213–I216, July 2008
  14. Kim, W., Crawford, M.M.: A novel adaptive classification method for hyperspectral data using manifold regularization kernel machines. In: First Workshop Hyperspectral Image Signal Process. Evol. Remote Sens., Grenoble, France, pp. 1–4, August 2009
  15. Crawford, M.M., Kim, W.: Manifold learning for multi-classifier systems via ensembles. Mult. Classif. Syst. 5519, 519–528 (2009)
  16. He, J., Zhang, L., Wang, Q., Li, Z.: Using diffusion geometric coordinates for hyperspectral imagery representation. IEEE Geosci. Remote Sens. Lett. 6(4), 767–771 (2009)
  17. Fauvel, M., Chanussot, J., Benediktsson, J.A.: Kernel principal component analysis for the classification of hyperspectral remote sensing data over urban areas. EURASIP J. Adv. Signal Process. (2009). doi:10.1155/2009/783194
  18. Ma, L., Crawford, M.M., Tian, J.W.: Local manifold learning based k-nearest-neighbor for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 48(11), 4099–4109 (2010)
  19. Ma, L., Crawford, M.M., Tian, J.W.: Anomaly detection for hyperspectral images based on robust locally linear embedding. J. Infrared Millimeter Terahertz Waves 31(6), 753–762 (2010)
  20. van der Maaten, L.J.P., Postma, E., van den Herik, H.J.: Dimensionality reduction: a comparative review. Tech. Rep., http://homepage.tudelft.nl/19j49/Publications.html (2009). Accessed Oct 2009
  21. Yan, S., Dong, X., Zhang, B., Zhang, H.J.: Graph embedding: a general framework for dimensionality reduction. In: IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR 2005), San Diego, CA, USA, vol. 2, pp. 830–837, June 2005
  22. Yan, S., Dong, X., Zhang, H.J., Yang, Q., Lin, S.: Graph embedding and extensions: a general framework for dimensionality reduction. IEEE Trans. Pattern Anal. Mach. Intell. 29(1), 40–51 (2007)
  23. Dijkstra, E.W.: A note on two problems in connexion with graphs. Numer. Math. 1(1), 269–271 (1959)
  24. de Silva, V., Tenenbaum, J.B.: Global versus local methods in nonlinear dimensionality reduction. In: Adv. Neural Inf. Process. Syst., Vancouver, BC, Canada, vol. 15, pp. 713–720, December 2002
  25. Chen, Y., Crawford, M.M., Ghosh, J.: Applying nonlinear manifold learning to hyperspectral data for land cover classification. In: IEEE Int. Geosci. Remote Sens. Symp., Seoul, Korea, pp. 4311–4314, June 2005
  26. Schölkopf, B., Smola, A.J., Müller, K.R.: Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 10(5), 1299–1319 (1998)
  27. Zhang, Z., Zha, H.: Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. SIAM J. Sci. Comput. 26(1), 313–338 (2004)
  28. Wang, J.: Improve local tangent space alignment using various dimensional local coordinates. Neurocomputing 71(16–18), 3575–3581 (2008)
  29. Belkin, M., Niyogi, P.: Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 15(6), 1373–1396 (2003)
  30. Ham, J., Lee, D.D., Mika, S., Schölkopf, B.: A kernel view of the dimensionality reduction of manifolds. In: Int. Conf. Mach. Learn., ACM, Banff, Alberta, Canada, vol. 69, pp. 369–376, August 2004
  31. Bengio, Y., Delalleau, O., Roux, N.L., Paiement, J.F., Vincent, P., Ouimet, M.: Learning eigenfunctions links spectral embedding and kernel PCA. Neural Comput. 16(10), 2197–2219 (2004)
  32. de Ridder, D., Duin, R.P.W.: Locally linear embedding for classification. Tech. Rep. PH-2002-01, Delft University of Technology, Delft, The Netherlands (2002)
  33. Li, H.Y., Teng, L., Chen, W.B., Shen, I.F.: Supervised learning on local tangent space. Lect. Notes Comput. Sci. 3496, 546–551 (2005)
  34. Bengio, Y., Paiement, J.F., Vincent, P.: Out-of-sample extensions for LLE, Isomap, MDS, eigenmaps, and spectral clustering. In: Adv. Neural Inf. Process. Syst. 16, MIT Press, Cambridge, MA, pp. 177–184 (2003)
  35. Ma, L., Crawford, M.M., Tian, J.W.: Generalized supervised local tangent space alignment for hyperspectral image classification. Electron. Lett. 46(7), 497–498 (2010)
  36. Yuhas, R.H., Goetz, A.F.H., Boardman, J.W.: Discrimination among semi-arid landscape endmembers using the spectral angle mapper (SAM) algorithm. In: Annu. JPL Airborne Geosci. Workshop, Pasadena, CA, vol. 1, pp. 147–149, June 1992
  37. Kruse, F.A., Lefkoff, A.B., Boardman, J.W., Heidebrecht, K.B., Shapiro, A.T., Barloon, P.J., et al.: The spectral image processing system (SIPS)—interactive visualization and analysis of imaging spectrometer data. Remote Sens. Environ. 44(2–3), 145–163 (1993)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  1. School of Civil Engineering and Department of Agronomy, Purdue University, West Lafayette, USA
  2. State Key Laboratory for Multi-spectral Information Processing Technologies, Huazhong University of Science and Technology, Wuhan, China
  3. Department of Civil Engineering, Purdue University, West Lafayette, USA