Semi-Supervised Local Fisher Discriminant Analysis for Dimensionality Reduction

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5012)


When only a small number of labeled samples are available, supervised dimensionality reduction methods tend to perform poorly due to overfitting. In such cases, unlabeled samples can help improve performance. In this paper, we propose a semi-supervised dimensionality reduction method that preserves the global structure of unlabeled samples while separating labeled samples of different classes from each other. The proposed method admits an analytic, globally optimal solution that can be computed via eigendecomposition, making it computationally reliable and efficient. We demonstrate its effectiveness through extensive simulations on benchmark data sets.
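The idea described in the abstract, combining a Fisher-style class-separation criterion on the labeled samples with a PCA-style total-scatter term on all samples, and solving a single generalized eigenproblem, can be illustrated with a minimal sketch. Note that the trade-off parameter `beta`, the function name, and the use of plain (non-local) Fisher scatters are illustrative assumptions, not the paper's exact local formulation:

```python
import numpy as np
from scipy.linalg import eigh

def semi_supervised_embedding(X_labeled, y, X_unlabeled, dim=2, beta=0.5):
    """Sketch: blend a Fisher (between/within-class) criterion on labeled
    data with a PCA-style total-scatter term over all data.
    `beta` trades off supervised separation against global structure.
    (Simplified illustration, not the paper's local formulation.)"""
    X_all = np.vstack([X_labeled, X_unlabeled])
    D = X_all.shape[1]

    # Fisher scatters computed on the labeled samples only
    Sb = np.zeros((D, D))   # between-class scatter
    Sw = np.zeros((D, D))   # within-class scatter
    mu_l = X_labeled.mean(axis=0)
    for c in np.unique(y):
        Xc = X_labeled[y == c]
        d = (Xc.mean(axis=0) - mu_l)[:, None]
        Sb += len(Xc) * (d @ d.T)
        Xc_centered = Xc - Xc.mean(axis=0)
        Sw += Xc_centered.T @ Xc_centered

    # Total scatter over labeled + unlabeled samples (global structure)
    Xc_all = X_all - X_all.mean(axis=0)
    St = Xc_all.T @ Xc_all

    # Blend supervised and unsupervised criteria
    A = beta * Sb + (1 - beta) * St
    B = beta * Sw + (1 - beta) * np.eye(D)

    # A single generalized eigendecomposition yields the globally
    # optimal projection under this criterion
    w, V = eigh(A, B)
    T = V[:, np.argsort(w)[::-1][:dim]]  # top `dim` eigenvectors
    return T  # columns span the embedding subspace

# Embed new points with X @ T
```

Because the solution is a closed-form eigendecomposition rather than an iterative optimization, there is no risk of local optima, which is the computational-reliability point made in the abstract.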







Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  1. Tokyo Institute of Technology, Tokyo, Japan
  2. IBM Research, Kanagawa, Japan
  3. Nikon Corporation, Saitama, Japan
  4. Ochanomizu University, Tokyo, Japan
