Improving Hyperspectral Classifiers: The Difference Between Reducing Data Dimensionality and Reducing Classifier Parameter Complexity

  • Asbjørn Berge
  • Anne Schistad Solberg
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4522)

Abstract

Hyperspectral data are usually high dimensional, and ground truth pixels are often scarce. Thus, applying even a simple classifier such as the Gaussian Maximum Likelihood (GML) classifier usually forces the analyst to reduce the complexity of the implicit parameter estimation task. For decades, the common perception in the literature has been that the solution is to reduce data dimensionality. However, as a result by Cover [1] shows, reducing dimensionality increases the risk of making the classification problem more complex. Using the simple GML classifier, we compare state-of-the-art dimensionality reduction strategies with a recently proposed strategy for sparsifying parameter estimates in full dimension [2]. Results show that reducing parameter estimation complexity by fitting sparse models in full dimension has a slight edge over the common approaches.
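
To make the contrast between the two strategies concrete, the sketch below classifies synthetic small-sample, high-dimensional data with a GML classifier under both regimes. It is illustrative only and not the paper's method: scikit-learn's GraphicalLasso stands in for the sparse inverse-covariance estimator of [2], PCA stands in for the feature-extraction strategies compared in the paper, and the data, dimensions, and regularization parameter are invented for the example.

```python
"""Sketch: GML with a sparse precision matrix in full dimension vs.
GML after dimensionality reduction. Assumptions (not from the paper):
GraphicalLasso replaces the estimator of Berge et al. [2], PCA replaces
the paper's feature extraction methods, and the data are synthetic."""
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def gml_log_discriminant(X, mean, precision):
    """Gaussian ML log-discriminant (equal priors assumed):
    -0.5 * (x - m)^T P (x - m) + 0.5 * log|P|."""
    diff = X - mean
    maha = np.einsum("ij,jk,ik->i", diff, precision, diff)
    _, logdet = np.linalg.slogdet(precision)
    return -0.5 * maha + 0.5 * logdet

def gml_classify(X, means, precisions):
    scores = np.column_stack([gml_log_discriminant(X, m, P)
                              for m, P in zip(means, precisions)])
    return scores.argmax(axis=1)

# Synthetic two-class problem: many bands, few training pixels per class.
d, n_train = 60, 80
mu = [np.zeros(d), 0.4 * np.ones(d)]
Xtr = [rng.normal(m, 1.0, size=(n_train, d)) for m in mu]
Xte = np.vstack([rng.normal(m, 1.0, size=(500, d)) for m in mu])
yte = np.repeat([0, 1], 500)

# Strategy A: stay in full dimension, constrain the parameter estimates
# by fitting a sparse inverse covariance (precision) per class.
means = [X.mean(axis=0) for X in Xtr]
precisions = [GraphicalLasso(alpha=0.1).fit(X).precision_ for X in Xtr]
err_sparse = (gml_classify(Xte, means, precisions) != yte).mean()

# Strategy B: reduce data dimensionality first, then fit plain GML.
pca = PCA(n_components=10).fit(np.vstack(Xtr))
Ztr = [pca.transform(X) for X in Xtr]
z_means = [Z.mean(axis=0) for Z in Ztr]
z_precisions = [np.linalg.inv(np.cov(Z, rowvar=False)) for Z in Ztr]
err_pca = (gml_classify(pca.transform(Xte), z_means, z_precisions) != yte).mean()

print(f"GML + sparse precision (full dim): error {err_sparse:.3f}")
print(f"GML after PCA to 10 dims:          error {err_pca:.3f}")
```

The design point mirrors the abstract: strategy A keeps every band and instead restricts the number of free covariance parameters, while strategy B discards dimensions before estimation and so risks making the classes harder to separate.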

Keywords

Dimensionality Reduction · Hyperspectral Image · Hyperspectral Data · Sparse Model · Full Dimension

References

  1. Cover, T.M.: Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Transactions on Electronic Computers 14(3), 326–334 (1965)
  2. Berge, A., Jensen, A.C., Solberg, A.S.: Sparse inverse covariance estimates for hyperspectral image classification. IEEE Trans. Geosci. Remote Sensing, accepted for publication (2007)
  3. Smith, M., Kohn, R.: Parsimonious covariance matrix estimation for longitudinal data. Journal of the American Statistical Association 97(460), 1141–1153 (2002)
  4. Pourahmadi, M.: Foundations of Time Series Analysis and Prediction Theory. Wiley, Chichester (2001)
  5. Lee, C., Landgrebe, D.: Feature extraction based on decision boundaries. IEEE Trans. Pattern Anal. Machine Intell. 15(4), 388–400 (1993)
  6. Kuo, B.C., Landgrebe, D.: A robust classification procedure based on mixture classifiers and nonparametric weighted feature extraction. IEEE Trans. Geosci. Remote Sensing 40(11), 2486–2494 (2002)
  7. Gamba, P.: A collection of data for urban area characterization. In: Proc. IEEE Geoscience and Remote Sensing Symposium (IGARSS'04) (2004)
  8. Ham, J., Chen, Y., Crawford, M.M., Ghosh, J.: Investigation of the random forest framework for classification of hyperspectral data. IEEE Trans. Geosci. Remote Sensing 43(3), 492–501 (2005)

Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Asbjørn Berge¹
  • Anne Schistad Solberg¹
  1. Department of Informatics, University of Oslo, Norway
