A Unified Framework for Probabilistic Component Analysis

  • Mihalis A. Nicolaou
  • Stefanos Zafeiriou
  • Maja Pantic
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8725)


We present a unifying framework that reduces the construction of probabilistic component analysis techniques to a mere selection of the latent neighbourhood, thus providing an elegant and principled way both to create novel component analysis models and to construct probabilistic equivalents of deterministic component analysis methods. Under our framework, we unify many popular and well-studied component analysis algorithms, such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Locality Preserving Projections (LPP) and Slow Feature Analysis (SFA), some of which have had no probabilistic equivalents in the literature thus far. We first define the Markov Random Fields (MRFs) that encapsulate the latent connectivity of the aforementioned component analysis techniques; subsequently, we show that the projection directions produced by PCA, LDA, LPP and SFA are also produced by the Maximum Likelihood (ML) solution of a single joint probability density function, composed by selecting one of the defined MRF priors while utilising a simple observation model. Furthermore, we propose novel Expectation Maximization (EM) algorithms that exploit the proposed joint PDF, and we generalize the proposed methodologies to arbitrary connectivities via parametrizable MRF products. Theoretical analysis and experiments on both simulated and real-world data demonstrate the usefulness of the proposed framework, yielding methods that outperform state-of-the-art equivalents.
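To make the abstract's ML/EM formulation concrete, the sketch below shows the simplest instance the framework subsumes: EM for probabilistic PCA in the zero-noise limit (in the style of Roweis's "EM algorithms for PCA and SPCA", ref. 16), where the latent prior is a fully disconnected MRF, i.e. i.i.d. Gaussian latents. This is an illustrative sketch on synthetic data, not the paper's generalized MRF-prior algorithm; all variable names here are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a 2-D latent signal embedded in 5 dimensions with small noise.
n, d, q = 500, 5, 2
true_W = rng.normal(size=(d, q))
X = true_W @ rng.normal(size=(q, n)) + 0.01 * rng.normal(size=(d, n))
X -= X.mean(axis=1, keepdims=True)  # centre the observations

# EM for PCA in the zero-noise limit:
#   E-step: infer latent coordinates Z given the current loadings W
#   M-step: re-estimate W by least squares given the inferred latents
W = rng.normal(size=(d, q))
for _ in range(200):
    Z = np.linalg.solve(W.T @ W, W.T @ X)    # E-step
    W = X @ Z.T @ np.linalg.inv(Z @ Z.T)     # M-step

# At convergence, W spans the leading principal subspace (up to an
# arbitrary rotation), so compare subspace projectors rather than W itself.
U = np.linalg.svd(X, full_matrices=False)[0][:, :q]
P_em = W @ np.linalg.pinv(W)   # projector onto span(W) from EM
P_svd = U @ U.T                # projector onto the PCA subspace from SVD
print(np.allclose(P_em, P_svd, atol=1e-6))
```

Swapping the i.i.d. latent prior for a connected MRF prior (chain, class-based, or locality-based connectivity) is what, under the paper's framework, turns this same ML machinery into SFA, LDA, or LPP respectively.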


Keywords: Unifying Framework · Probabilistic Methods · Component Analysis · Dimensionality Reduction · Random Fields




Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Mihalis A. Nicolaou (1)
  • Stefanos Zafeiriou (1)
  • Maja Pantic (1, 2)
  1. Department of Computing, Imperial College London, UK
  2. EEMCS, University of Twente, Netherlands
