A Hierarchical Multiclassifier System for Hyperspectral Data Analysis

  • Shailesh Kumar
  • Joydeep Ghosh
  • Melba Crawford
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1857)

Abstract

Many real-world classification problems involve high-dimensional inputs and a large number of classes. Feature extraction and modular learning approaches can be used to simplify such problems. In this paper, we introduce a hierarchical multiclassifier paradigm in which a C-class problem is recursively decomposed into C-1 two-class problems. A generalized modular learning framework is used to partition a set of classes into two disjoint groups called meta-classes. The coupled problems of finding a good partition and of searching for a linear feature extractor that best discriminates the resulting two meta-classes are solved simultaneously at each stage of the recursive algorithm. This results in a binary tree whose leaf nodes represent the original C classes. The proposed hierarchical multiclassifier architecture was used to classify 12 types of landcover from 183-dimensional hyperspectral data, improving classification accuracy by 4 to 10% relative to other feature extraction and modular learning approaches. Moreover, the class hierarchy that was discovered automatically conformed very well to a human domain expert's opinion, demonstrating the potential of such a modular learning approach for discovering domain knowledge from data.
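
To make the decomposition concrete, here is a minimal sketch of the recursive meta-class splitting described above. It is an illustration, not the paper's method: a simple farthest-pair seeding of class-mean spectra stands in for the generalized modular learning framework's partitioning, and Fisher's linear discriminant stands in for the jointly optimized feature extractor. All names (BHCNode, build, classify) and parameters (the 1e-3 shrinkage term) are hypothetical.

```python
import numpy as np

class BHCNode:
    """One node of the binary class hierarchy. An internal node splits its
    set of class labels into two meta-classes via a 1-D linear projection;
    a leaf holds a single original class."""
    def __init__(self, classes):
        self.classes = list(classes)
        self.w = None      # linear feature extractor (projection direction)
        self.t = 0.0       # decision threshold on the projected feature
        self.left = None   # meta-class with the smaller projected mean
        self.right = None  # meta-class with the larger projected mean

def fisher_direction(X0, X1):
    """Fisher's linear discriminant between two meta-class sample sets
    (stand-in for the paper's jointly optimized feature extractor)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    Sw += 1e-3 * np.eye(Sw.shape[0])  # shrinkage: 183-band data is high-dim
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

def build(X, y, classes):
    """Recursively build the tree; C classes yield C-1 internal nodes."""
    node = BHCNode(classes)
    if len(classes) == 1:
        return node  # leaf: one original landcover class
    # Surrogate partition (not the paper's coupled optimization): seed with
    # the two most distant class-mean spectra, then assign each class to
    # the nearer seed to form the two meta-classes.
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    D = np.linalg.norm(means[:, None] - means[None], axis=2)
    i, j = np.unravel_index(D.argmax(), D.shape)
    A = [c for k, c in enumerate(classes) if D[k, i] <= D[k, j]]
    B = [c for c in classes if c not in A]
    XA, XB = X[np.isin(y, A)], X[np.isin(y, B)]
    node.w = fisher_direction(XA, XB)
    pA, pB = (XA @ node.w).mean(), (XB @ node.w).mean()
    node.t = 0.5 * (pA + pB)
    lo, hi = (A, B) if pA < pB else (B, A)
    node.left, node.right = build(X, y, lo), build(X, y, hi)
    return node

def classify(node, x):
    """Route one pixel from the root to a leaf and return its class label."""
    while len(node.classes) > 1:
        node = node.right if x @ node.w > node.t else node.left
    return node.classes[0]

# Hypothetical usage: X is an (n_pixels, 183) array of spectra, y the labels.
# tree = build(X, y, sorted(set(y)))
# label = classify(tree, X[0])
```

With C input classes the recursion produces exactly C-1 internal two-class discriminators, matching the abstract's decomposition, and each pixel is labeled by following a single root-to-leaf path.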

Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Shailesh Kumar (1)
  • Joydeep Ghosh (1)
  • Melba Crawford (2)

  1. Laboratory of Artificial Neural Systems, Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, USA
  2. Center for Space Research, The University of Texas at Austin, Austin, USA
