Some Experiments with Ensembles of Neural Networks for Classification of Hyperspectral Images

  • Carlos Hernández-Espinosa
  • Mercedes Fernández-Redondo
  • Joaquín Torres-Sospedra
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3173)


A hyperspectral image is used in remote sensing to identify different types of land cover on the Earth's surface. It is composed of pixels, and each pixel consists of spectral bands of the reflected electromagnetic spectrum. Neural networks and ensemble techniques have previously been applied to remote sensing images with a low number of spectral bands per pixel (fewer than 20). In this paper we apply different ensemble methods of Multilayer Feedforward networks to images with 224 spectral bands per pixel, where the classification problem is clearly different. We conclude that, in general, using an ensemble yields an improvement. For databases with a low number of classes and pixels the improvement is smaller and similar across all ensemble methods. However, for databases with a high number of classes and pixels the improvement depends strongly on the ensemble method.
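The combination scheme described above can be illustrated with a minimal sketch: several one-hidden-layer Multilayer Feedforward networks each produce class probabilities for a 224-band pixel, and the ensemble decision averages their outputs. All names, sizes, and the random (untrained) weights below are illustrative assumptions, not the paper's actual configuration; in the experiments each network would be trained, e.g. by backpropagation with bagging or boosting.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BANDS = 224    # spectral bands per pixel (AVIRIS-style hyperspectral data)
N_CLASSES = 5    # hypothetical number of land-cover classes
N_HIDDEN = 16    # hypothetical hidden-layer size
N_NETS = 9       # hypothetical ensemble size

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer Multilayer Feedforward network returning softmax scores."""
    h = np.tanh(x @ W1 + b1)
    z = h @ W2 + b2
    e = np.exp(z - z.max())          # numerically stable softmax
    return e / e.sum()

# Assumption: random weights stand in for trained networks to keep the
# sketch self-contained; a real ensemble would train each member separately.
nets = [(rng.normal(size=(N_BANDS, N_HIDDEN)) * 0.1,
         np.zeros(N_HIDDEN),
         rng.normal(size=(N_HIDDEN, N_CLASSES)) * 0.1,
         np.zeros(N_CLASSES))
        for _ in range(N_NETS)]

pixel = rng.normal(size=N_BANDS)     # one 224-band pixel spectrum
probs = np.mean([mlp_forward(pixel, *net) for net in nets], axis=0)
label = int(np.argmax(probs))        # ensemble decision by output averaging
print(label, probs.sum())
```

Output averaging is only one combination rule; methods such as boosting or decorrelated training differ in how the member networks are built and weighted, which is where the paper's ensemble methods diverge.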


Keywords: Support Vector Machine · Spectral Band · Hyperspectral Image · Ensemble Method · Multispectral Image




Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Carlos Hernández-Espinosa 1
  • Mercedes Fernández-Redondo 1
  • Joaquín Torres-Sospedra 1
  1. Dept. de Ingeniería y Ciencia de los Computadores, Universidad Jaume I, Castellón, Spain
