Adaptive Feature Spaces for Land Cover Classification with Limited Ground Truth Data

  • Joseph T. Morgan
  • Alex Henneguelle
  • Melba M. Crawford
  • Joydeep Ghosh
  • Amy Neuenschwander
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2364)

Abstract

Classification of hyperspectral data is challenging because of high-dimensional inputs (O(100) bands), several possible output classes with uneven priors, and scarcity of labeled information. In earlier work, a multiclassifier system arranged as a binary hierarchy was developed to group classes for easier, progressive discrimination [27]. This paper substantially expands the scope of such a system by integrating a feature reduction scheme that adaptively adjusts to the amount of labeled data available, while exploiting the highly correlated nature of certain adjacent hyperspectral bands. The resulting best-basis binary hierarchical classifier (BB-BHC) family is thus able to address the "small sample size" problem, as evidenced by our experimental results.
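The band-grouping idea underlying the best-basis scheme can be illustrated with a minimal sketch: greedily merge adjacent spectral bands whose pairwise correlation exceeds a threshold, replacing each group with its mean band. This is an illustrative simplification, not the authors' BB-BHC algorithm; the function name, threshold value, and mean-merging rule are assumptions made for the example.

```python
import numpy as np

def merge_correlated_bands(X, threshold=0.95):
    """Greedily group adjacent spectral bands whose correlation with
    the previous band exceeds `threshold`, then replace each group by
    its mean band. X has shape (n_pixels, n_bands)."""
    n_bands = X.shape[1]
    groups = [[0]]
    for b in range(1, n_bands):
        prev = groups[-1][-1]
        # Correlation between this band and the last band of the open group.
        r = np.corrcoef(X[:, prev], X[:, b])[0, 1]
        if r >= threshold:
            groups[-1].append(b)   # highly correlated: extend the group
        else:
            groups.append([b])     # weakly correlated: start a new group
    # Reduced feature space: one averaged band per group.
    reduced = np.column_stack([X[:, g].mean(axis=1) for g in groups])
    return reduced, groups
```

Because adjacent hyperspectral bands are often nearly redundant, such merging can shrink the input dimensionality substantially, which in turn eases covariance estimation when labeled samples are scarce.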


References

  1. T.W. Anderson, An Introduction to Multivariate Statistical Analysis. New York: John Wiley & Sons, 1984.
  2. D. Landgrebe, “Information extraction principles and methods for multispectral and hyperspectral image data,” Information Processing for Remote Sensing, ed. C.H. Chen, World Scientific Pub. Co., NJ, 1999.
  3. S. Tadjudin and D.A. Landgrebe, “Robust parameter estimation for mixture model,” IEEE Trans. Geosci. Rem. Sens., 38(1): 439–45, 2000.
  4. S. Kumar, J. Ghosh, and M.M. Crawford, “Hierarchical fusion of multiple classifiers for hyperspectral data analysis,” Pattern Analysis and Applications, Special Issue on Classifier Fusion (to appear).
  5. S. Tadjudin and D.A. Landgrebe, “Covariance estimation with limited training samples,” IEEE Trans. Geosci. Rem. Sens., 37(4): 2113–8, 1999.
  6. P.A. Devijver and J. Kittler (eds.), Pattern Recognition Theory and Application. Springer-Verlag, 1987.
  7. S. Kumar, J. Ghosh, and M.M. Crawford, “Best basis feature extraction algorithms for classification of hyperspectral data,” IEEE Trans. Geosci. Rem. Sens., 39(7): 1368–79, 2001.
  8. S.J. Raudys and A.K. Jain, “Small sample size effects in statistical pattern recognition: recommendations for practitioners,” IEEE Trans. PAMI, 13(3): 252–64, 1991.
  9. Q. Jackson and D. Landgrebe, “An adaptive classifier design for high-dimensional data analysis with a limited training data set,” IEEE Trans. Geosci. Rem. Sens., 39(12): 2664–79, 2001.
  10. T.W. Anderson, An Introduction to Multivariate Statistical Analysis. New York: John Wiley & Sons, 1984.
  11. A. Webb, Statistical Pattern Recognition. London: Oxford University Press, 1999.
  12. M. Skurichina, “Stabilizing weak classifiers,” Thesis, Vilnius State University, 2001.
  13. L. Breiman, “Bagging predictors,” Machine Learning, 24(2): 123–40, 1996.
  14. A. McCallum, R. Rosenfeld, T. Mitchell, and A.Y. Ng, “Improving text classification by shrinkage in a hierarchy of classes,” Proc. 15th International Conf. on Machine Learning, Madison, WI, Morgan Kaufmann, San Mateo, CA, 359–67, 1998.
  15. K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd Ed., Boston, 1990.
  16. S. Raudys and R.P.W. Duin, “Expected classification error of the Fisher linear classifier with pseudo-inverse covariance matrix,” Pattern Recognition Letters, 19: 385–92, 1998.
  17. M. Skurichina and R.P.W. Duin, “Stabilizing classifiers for very small sample sizes,” Proc. 13th Int. Conf. on Pattern Recognition (Vienna, Austria, Aug. 25–29), Vol. 2, Track B: Pattern Recognition and Signal Analysis, IEEE Computer Society Press, Los Alamitos, 891–6, 1996.
  18. A. Blum and T. Mitchell, “Combining labeled and unlabeled data with co-training,” Proc. 11th Annual Conf. on Computational Learning Theory, 92–100, 1998.
  19. Jet Propulsion Lab, California Institute of Technology, http://makalu.jpl.nasa.gov/.
  20. B. Jeon and D. Landgrebe, “Partially supervised classification using weighted unsupervised clustering,” IEEE Trans. Geosci. Rem. Sens., 37(2): 1073–9, March 1999.
  21. T.M. Mitchell, “The role of unlabeled data in supervised learning,” Proc. Sixth Intl. Colloquium on Cognitive Science, 1999.
  22. V.R. de Sa, “Learning classification with unlabeled data,” Advances in Neural Information Processing Systems 6, 1994.
  23. B.M. Shahshahani and D.A. Landgrebe, “The effect of unlabeled samples in reducing the small sample size problem and mitigating the Hughes phenomenon,” IEEE Trans. Geosci. Rem. Sens., 32(5): 1087–95, 1994.
  24. T. Cocks, R. Jenssen, A. Stewart, I. Wilson, and T. Shields, “The HyMap airborne hyperspectral sensor: the system, calibration and performance,” Proc. 1st EARSeL Workshop on Imaging Spectroscopy (M. Schaepman, D. Schläpfer, and K.I. Itten, eds.), Zurich, EARSeL, Paris, 37–42, 6–8 October 1998.
  25. X. Jia, Classification Techniques for Hyperspectral Remote Sensing Image Data. PhD Thesis, University College, ADFA, University of New South Wales, Australia, 1996.
  26. X. Jia and J.A. Richards, “Segmented principal components transformation for efficient hyperspectral remote-sensing image display and classification,” IEEE Trans. Geosci. Rem. Sens., 37(1): 538–42, 1999.
  27. S. Kumar, J. Ghosh, and M.M. Crawford, “A hierarchical multiclassifier system for hyperspectral data analysis,” 1st Intl. Workshop on Multiple Classifier Systems, Sardinia, Italy, 270–9, June 2000.
  28. D. Landgrebe, “Hyperspectral image data analysis as a high dimensional signal processing problem” (invited), IEEE Signal Processing Magazine, 19(1): 17–28, 2002.
  29. K. Tumer and J. Ghosh, “Error correlation and error reduction in ensemble classifiers,” Connection Science, Special Issue on Combining, 8(3/4): 385–404, 1996.

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Joseph T. Morgan (1)
  • Alex Henneguelle (2)
  • Melba M. Crawford (1)
  • Joydeep Ghosh (2)
  • Amy Neuenschwander (1)
  1. Center for Space Research, The University of Texas at Austin, USA
  2. Department of Electrical and Computer Engineering, The University of Texas at Austin, USA