A Theoretical Comparison of Two Linear Dimensionality Reduction Techniques

  • Luis Rueda
  • Myriam Herrera
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4225)


A theoretical analysis comparing two linear dimensionality reduction (LDR) techniques, namely Fisher’s discriminant (FD) and Loog-Duin (LD) dimensionality reduction, is presented. The necessary and sufficient conditions under which FD and LD yield the same linear transformation are stated and proved. To derive these conditions, it is first shown that the two criteria preserve the same maximum value after a diagonalization process is applied; the necessary and sufficient conditions are then derived for various cases, including coincident covariance matrices, coincident prior probabilities, and the case in which one of the covariance matrices is the identity. A measure for comparing the two criteria is derived from these conditions and used to show empirically that the conditions are statistically related to the classification error of a post-processing quadratic classifier and to the Chernoff distance in the transformed space.
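The two criteria compared in the abstract can be made concrete for a two-class Gaussian problem. The sketch below is illustrative and not the authors' code: it computes the FD transformation from the leading generalized eigenvectors of (S_B, S_W), and the LD transformation from the two-class Chernoff criterion of Loog and Duin (2004), in which S_B is replaced by a Chernoff-based matrix involving matrix logarithms of the whitened class covariances. All function and variable names are our own.

```python
# Illustrative sketch (not from the paper) of the two LDR criteria for a
# two-class Gaussian problem with means m1, m2, covariances S1, S2, and
# prior probabilities p1, p2 = 1 - p1.
import numpy as np
from scipy.linalg import eigh, fractional_matrix_power, logm

def scatter_matrices(m1, m2, S1, S2, p1):
    """Within- and between-class scatter matrices for two classes."""
    p2 = 1.0 - p1
    SW = p1 * S1 + p2 * S2                 # within-class scatter
    d = (m1 - m2).reshape(-1, 1)
    SB = p1 * p2 * (d @ d.T)               # between-class scatter
    return SW, SB

def fisher_lda(m1, m2, S1, S2, p1, dim):
    """Fisher's discriminant: top-dim solutions of S_B v = lambda S_W v."""
    SW, SB = scatter_matrices(m1, m2, S1, S2, p1)
    evals, evecs = eigh(SB, SW)            # generalized symmetric problem
    order = np.argsort(evals)[::-1][:dim]
    return evecs[:, order].T               # rows span the FD subspace

def loog_duin(m1, m2, S1, S2, p1, dim):
    """Loog-Duin: S_B is replaced by a Chernoff-based matrix S_E."""
    p2 = 1.0 - p1
    SW, SB = scatter_matrices(m1, m2, S1, S2, p1)
    Wh = fractional_matrix_power(SW, 0.5)     # S_W^{1/2}
    Wih = fractional_matrix_power(SW, -0.5)   # S_W^{-1/2}
    # Whitened class covariances; they average to the identity,
    # so the log of their weighted mean vanishes.
    C1, C2 = Wih @ S1 @ Wih, Wih @ S2 @ Wih
    SE = Wh @ (Wih @ SB @ Wih
               - (p1 * logm(C1) + p2 * logm(C2)) / (p1 * p2)) @ Wh
    SE = np.real((SE + SE.T) / 2)          # symmetrize against round-off
    evals, evecs = eigh(SE, SW)
    order = np.argsort(evals)[::-1][:dim]
    return evecs[:, order].T
```

When S1 = S2, the whitened covariances equal the identity, the matrix-logarithm terms vanish, and S_E reduces to S_B, so both functions return the same transformation; this is the coincident-covariance case among the equivalence conditions the paper studies.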




References

  1. Aladjem, M.: Linear Discriminant Analysis for Two Classes via Removal of Classification Structure. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(2), 187–192 (1997)
  2. Ali, M., Rueda, L., Herrera, M.: On the Performance of Chernoff-distance-based Linear Dimensionality Reduction Techniques. In: Lamontagne, L., Marchand, M. (eds.) Canadian AI 2006. LNCS (LNAI), vol. 4013, pp. 469–480. Springer, Heidelberg (2006)
  3. Cooke, T.: Two Variations on Fisher’s Linear Discriminant for Pattern Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 24(2), 268–273 (2002)
  4. Du, Q., Chang, C.: A Linear Constrained Distance-based Discriminant Analysis for Hyperspectral Image Classification. Pattern Recognition 34(2), 361–373 (2001)
  5. Duda, R., Hart, P., Stork, D.: Pattern Classification, 2nd edn. John Wiley and Sons, Inc., New York (2000)
  6. Lehmann, E., D’Abrera, H.: Nonparametrics: Statistical Methods Based on Ranks. Prentice-Hall, Englewood Cliffs (1998)
  7. Lippmann, R.: An Introduction to Computing with Neural Nets. In: Neural Networks: Theoretical Foundations and Analysis, pp. 5–24. IEEE Computer Society Press, Los Alamitos (1992)
  8. Loog, M., Duin, R.P.W.: Linear Dimensionality Reduction via a Heteroscedastic Extension of LDA: The Chernoff Criterion. IEEE Transactions on Pattern Analysis and Machine Intelligence 26(6), 732–739 (2004)
  9. Loog, M., Duin, R.: Non-iterative Heteroscedastic Linear Dimension Reduction for Two-Class Data. In: Caelli, T.M., Amin, A., Duin, R.P.W., Kamel, M.S., de Ridder, D. (eds.) SPR 2002 and SSPR 2002. LNCS, vol. 2396, pp. 508–517. Springer, Heidelberg (2002)
  10. Lotlikar, R., Kothari, R.: Adaptive Linear Dimensionality Reduction for Classification. Pattern Recognition 33(2), 185–194 (2000)
  11. Murphy, O.: Nearest Neighbor Pattern Classification Perceptrons. In: Neural Networks: Theoretical Foundations and Analysis, pp. 263–266. IEEE Press, Los Alamitos (1992)
  12. Rao, A., Miller, D., Rose, K., Gersho, A.: A Deterministic Annealing Approach for Parsimonious Design of Piecewise Regression Models. IEEE Transactions on Pattern Analysis and Machine Intelligence 21(2), 159–173 (1999)
  13. Raudys, S.: On Dimensionality, Sample Size, and Classification Error of Nonparametric Linear Classification. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(6), 667–671 (1997)
  14. Raudys, S.: Evolution and Generalization of a Single Neurone: I. Single-layer Perceptron as Seven Statistical Classifiers. Neural Networks 11(2), 283–296 (1998)
  15. Raudys, S.: Evolution and Generalization of a Single Neurone: II. Complexity of Statistical Classifiers and Sample Size Considerations. Neural Networks 11(2), 297–313 (1998)
  16. Rueda, L.: Selecting the Best Hyperplane in the Framework of Optimal Pairwise Linear Classifiers. Pattern Recognition Letters 25(2), 49–62 (2004)
  17. Rueda, L., Herrera, M.: Necessary and Sufficient Conditions for the Equivalence of Two Linear Dimensionality Reduction Techniques. Submitted for publication (2006)
  18. Rueda, L., Oommen, B.J.: On Optimal Pairwise Linear Classifiers for Normal Distributions: The Two-Dimensional Case. IEEE Transactions on Pattern Analysis and Machine Intelligence 24(2), 274–280 (2002)
  19. Rueda, L., Oommen, B.J.: On Optimal Pairwise Linear Classifiers for Normal Distributions: The d-Dimensional Case. Pattern Recognition 36(1), 13–23 (2003)
  20. Theodoridis, S., Koutroumbas, K.: Pattern Recognition, 3rd edn. Elsevier, Amsterdam (2006)
  21. Webb, A.: Statistical Pattern Recognition, 2nd edn. John Wiley, New York (2002)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  1. Luis Rueda, Department of Computer Science and Center for Biotechnology, University of Concepción, Concepción, Chile
  2. Myriam Herrera, Department and Institute of Informatics, National University of San Juan, San Juan, Argentina
