Probabilistic Linear Discriminant Analysis

  • Sergey Ioffe
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3954)


Abstract

Linear dimensionality reduction methods, such as LDA, are often used in object recognition for feature extraction, but do not address the problem of how to use these features for recognition. In this paper, we propose Probabilistic LDA, a generative probability model with which we can both extract the features and combine them for recognition. The latent variables of PLDA represent both the class of the object and the view of the object within a class. By making examples of the same class share the class variable, we show how to train PLDA and use it for recognition on previously unseen classes. The usual LDA features are derived as a result of training PLDA, but in addition have a probability model attached to them, which automatically gives more weight to the more discriminative features. With PLDA, we can build a model of a previously unseen class from a single example, and can combine multiple examples for a better representation of the class. We show applications to classification, hypothesis testing, class inference, and clustering, on classes not observed during training.
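
To make the generative structure described above concrete, the following is a minimal numerical sketch (not the paper's implementation) of a PLDA-style model: each example is explained by a latent class variable shared across examples of the same class plus a per-example view variable, and a pair of examples can be scored with a same-class versus different-class likelihood ratio. The dimensions, the diagonal covariances, and all variable names below are illustrative assumptions.

```python
# Minimal sketch of a PLDA-style generative model and same-class hypothesis test.
# Assumptions: diagonal between-class variances (psi), isotropic within-class
# (view) noise (sigma2), and hand-picked dimensions; nothing here is learned.
import numpy as np

rng = np.random.default_rng(0)

D = 4                                   # observed feature dimension (assumed)
K = 3                                   # number of simulated classes (assumed)
N_PER_CLASS = 5

mu = rng.normal(size=D)                 # global mean
psi = np.array([2.0, 1.0, 0.5, 0.1])    # between-class variances (assumed)
sigma2 = 0.3                            # within-class (view) variance (assumed)

def sample_class_center():
    """Draw the latent class variable; all examples of a class share it."""
    return mu + rng.normal(size=D) * np.sqrt(psi)

def sample_example(center):
    """Draw one example: shared class center plus per-example view noise."""
    return center + rng.normal(size=D) * np.sqrt(sigma2)

# Generate a few classes, each with several examples sharing the class variable.
data = {k: [sample_example(c) for _ in range(N_PER_CLASS)]
        for k, c in ((k, sample_class_center()) for k in range(K))}

def log_gaussian(x, mean, var):
    """Log density of independent Gaussians with per-dimension variances."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def same_class_llr(x1, x2):
    """Log-likelihood ratio: do x1 and x2 share the class variable?
    Under 'different', each is an independent draw from N(mu, psi + sigma2);
    under 'same', the shared center makes them correlated per dimension."""
    var_marg = psi + sigma2
    ll_diff = log_gaussian(x1, mu, var_marg) + log_gaussian(x2, mu, var_marg)
    ll_same = 0.0
    for d in range(D):
        cov = np.array([[psi[d] + sigma2, psi[d]],
                        [psi[d],          psi[d] + sigma2]])
        diff = np.array([x1[d] - mu[d], x2[d] - mu[d]])
        ll_same += -0.5 * (np.log((2 * np.pi) ** 2 * np.linalg.det(cov))
                           + diff @ np.linalg.solve(cov, diff))
    return ll_same - ll_diff

# Same-class pairs should typically score higher than different-class pairs.
print("same-class LLR:", same_class_llr(data[0][0], data[0][1]))
print("diff-class LLR:", same_class_llr(data[0][0], data[1][0]))
```

In the paper the between- and within-class covariances are learned from training classes rather than fixed as above; the relative size of the between-class and within-class variance along each feature direction is what automatically gives more weight to the more discriminative features, and the same likelihood-ratio machinery extends to single-example class models and to combining multiple examples of an unseen class.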


Keywords: Linear Discriminant Analysis, Gaussian Mixture Model, Class Center, Scatter Matrix, Class Inference


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Sergey Ioffe
  1. Fujifilm Software, San Jose, USA
