Improving Hyperspectral Classifiers: The Difference Between Reducing Data Dimensionality and Reducing Classifier Parameter Complexity
Hyperspectral data are typically high dimensional, and ground-truth pixels are often scarce. Applying even a simple classifier such as the Gaussian Maximum Likelihood (GML) classifier therefore usually forces the analyst to reduce the complexity of the implicit parameter-estimation task. For decades, the common perception in the literature has been that the solution is to reduce the dimensionality of the data. However, as a result by Cover shows, reducing dimensionality risks making the classification problem more complex. Using the simple GML classifier, we compare state-of-the-art dimensionality-reduction strategies with a recently proposed strategy that fits sparse parameter estimates in full dimension. Results show that reducing parameter-estimation complexity by fitting sparse models in full dimension has a slight edge over the common approaches.
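The contrast between the two strategies in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's actual method: it uses synthetic data in place of a hyperspectral scene, scikit-learn's `QuadraticDiscriminantAnalysis` as the GML classifier, and its `reg_param` covariance shrinkage as a crude stand-in for the sparse inverse-covariance estimates of Berge et al.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for hyperspectral pixels: 100 "bands", few labelled samples.
X, y = make_classification(n_samples=300, n_features=100, n_informative=20,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Strategy 1: reduce data dimensionality first, then fit GML (QDA = per-class
# Gaussian maximum likelihood with quadratic decision boundaries).
reduced = make_pipeline(PCA(n_components=10), QuadraticDiscriminantAnalysis())
reduced.fit(X_tr, y_tr)

# Strategy 2: stay in full dimension but regularize the parameter estimates
# (reg_param shrinks each class covariance towards a scaled identity; a crude
# stand-in for fitting sparse covariance models in full dimension).
full = QuadraticDiscriminantAnalysis(reg_param=0.5)
full.fit(X_tr, y_tr)

print("PCA + GML accuracy:           %.3f" % reduced.score(X_te, y_te))
print("Full-dim regularized GML acc: %.3f" % full.score(X_te, y_te))
```

Both routes tame the same problem: with few labelled pixels, the per-class covariance matrices of a full-dimensional GML classifier are badly conditioned, so one either shrinks the data or shrinks the estimates.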
Keywords: Dimensionality Reduction · Hyperspectral Image · Hyperspectral Data · Sparse Model · Full Dimension
- 2. Berge, A., Jensen, A.C., Solberg, A.S.: Sparse inverse covariance estimates for hyperspectral image classification. IEEE Trans. Geosci. Remote Sensing, accepted for publication (2007)
- 4. Pourahmadi, M.: Foundations of Time Series Analysis and Prediction Theory. Wiley, Chichester (2001)
- 6. Kuo, B.C., Landgrebe, D.: A robust classification procedure based on mixture classifiers and nonparametric weighted feature extraction. IEEE Trans. Geosci. Remote Sensing 40(11), 2486–2494 (2002)
- 7. Gamba, P.: A collection of data for urban area characterization. In: Proc. IEEE Geoscience and Remote Sensing Symposium (IGARSS'04) (2004)