Dimensionality Reduction by Canonical Contextual Correlation Projections

  • Marco Loog
  • Bram van Ginneken
  • Robert P. W. Duin
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3021)

Abstract

A linear, discriminative, supervised technique for reducing feature vectors extracted from image data to a lower-dimensional representation is proposed. It is derived from classical Fisher linear discriminant analysis (LDA) and is useful, for example, in supervised segmentation tasks in which a high-dimensional feature vector describes the local structure of the image. More generally, the main idea of the technique is applicable in discriminative and statistical modelling that involves contextual data.

LDA is a basic, well-known, and useful technique in many applications. Our contribution is to extend the use of LDA to cases in which there are dependencies between the output variables, i.e., the class labels, and not only between the input variables; the latter can already be dealt with by standard LDA.
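For reference, standard LDA finds directions that maximize between-class scatter relative to within-class scatter. The following is a minimal illustrative sketch (not the authors' implementation), using the usual scatter-matrix formulation:

```python
import numpy as np

def lda_projection(X, y, n_components):
    """Fisher LDA: directions maximizing between-class over
    within-class scatter (minimal sketch, not the paper's code).

    X: (n_samples, n_features) array; y: integer class labels.
    """
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    n_features = X.shape[1]
    Sw = np.zeros((n_features, n_features))  # within-class scatter
    Sb = np.zeros((n_features, n_features))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean_all)[:, None]
        Sb += len(Xc) * (d @ d.T)
    # Solve the generalized eigenproblem Sb v = lambda Sw v and
    # keep the leading eigenvectors as projection directions.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs.real[:, order[:n_components]]
```

Projecting the data with `X @ lda_projection(X, y, k)` yields the k-dimensional discriminative representation.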

The principal idea is that where standard LDA merely takes into account a single class label for every feature vector, the new technique also incorporates the class labels of its neighborhood in the analysis. In this way, the spatial class label configuration in the vicinity of every feature vector is accounted for, resulting in a technique suitable for, e.g., image data. This spatial LDA is derived from a formulation of standard LDA in terms of canonical correlation analysis. The linear dimensionality reduction transformation thus obtained is called the canonical contextual correlation projection.
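The CCA route can be sketched as follows: standard LDA is recovered by a canonical correlation analysis between the feature vectors and a one-hot encoding of their class labels, and the contextual variant replaces the single label by a concatenation of the labels in a neighborhood. The sketch below is a simplified, hypothetical illustration (1-D "images" with two neighbors, regularized covariances), not the paper's exact construction:

```python
import numpy as np

def cca_projection(X, Y, n_components):
    """Leading CCA directions for X against Y (sketch).

    With Y a one-hot class encoding this recovers an LDA-like
    subspace; with Y extended by neighboring labels it gives a
    contextual projection in the spirit of the text.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    eps = 1e-8  # small ridge so the label covariance is invertible
    Cxx = Xc.T @ Xc + eps * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc + eps * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc
    # Eigenproblem for Cxx^{-1} Cxy Cyy^{-1} Cyx gives the
    # canonical directions on the X side.
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    eigvals, eigvecs = np.linalg.eig(M)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs.real[:, order[:n_components]]

def one_hot(y, n_classes):
    return np.eye(n_classes)[y]

def contextual_labels(y, n_classes):
    """Concatenate one-hot labels of each sample and its two
    neighbors along a 1-D lattice (wrap-around for simplicity)."""
    Y = one_hot(y, n_classes)
    return np.hstack([np.roll(Y, 1, axis=0), Y, np.roll(Y, -1, axis=0)])
```

Feeding `contextual_labels(y, C)` instead of `one_hot(y, C)` into `cca_projection` is what lets the label context of each pixel influence the learned projection.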

An additional drawback of LDA is that it cannot extract more features than the number of classes minus one. In the two-class case this means that only a reduction to one dimension is possible. Our contextual LDA approach can avoid such extreme deterioration of the classification space and retain more than one dimension.
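The "classes minus one" bound comes from the rank of the between-class scatter matrix: with C classes the class-mean deviations from the overall mean satisfy one linear constraint, so the scatter of the means has rank at most C−1. A small numeric check on assumed illustrative data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # 10-D features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # two classes

mean_all = X.mean(axis=0)
Sb = np.zeros((10, 10))
for c in (0, 1):
    mc = X[y == c].mean(axis=0)
    d = (mc - mean_all)[:, None]
    Sb += (y == c).sum() * (d @ d.T)

# With two classes the between-class scatter is rank one, so
# standard LDA yields at most a single discriminant direction.
print(np.linalg.matrix_rank(Sb))  # 1
```

The contextual label encoding raises this rank, since the joint configuration of neighboring labels takes many more distinct values than the class label alone.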

The technique is exemplified on a pixel-based segmentation problem. An illustrative experiment on a medical image segmentation task shows the performance improvements attainable by employing the canonical contextual correlation projection.

Keywords

Feature Vector · Dimensionality Reduction · Linear Discriminant Analysis · Class Label · Canonical Correlation Analysis

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Marco Loog (1)
  • Bram van Ginneken (1)
  • Robert P. W. Duin (2)
  1. Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
  2. Information and Communication Theory Group, Delft University of Technology, Delft, The Netherlands