Coordinating Principal Component Analyzers

  • Jakob J. Verbeek
  • Nikos Vlassis
  • Ben Kröse
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2415)


Mixtures of Principal Component Analyzers can be used to model high dimensional data that lie on or near a low dimensional manifold. By linearly mapping the PCA subspaces to one global low dimensional space, we obtain a ‘global’ low dimensional coordinate system for the data. As shown by Roweis et al., ensuring consistent global low-dimensional coordinates for the data can be expressed as a penalized likelihood optimization problem. We show that a restricted form of the Mixtures of Probabilistic PCA model allows for a more efficient algorithm. Experimental results are provided to illustrate the viability of the method.
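The pipeline the abstract describes can be sketched on a toy problem. This is an illustrative stand-in, not the authors' algorithm: k-means plus per-cluster PCA replaces the mixture of probabilistic PCAs, and each cluster's affine map into the global coordinate system is fitted against the known manifold parameter `t`, whereas the paper learns these maps jointly by penalized likelihood without access to `t`. All variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy 1-D manifold (half circle) embedded in 3-D.
n = 500
t = np.sort(rng.uniform(0.0, np.pi, n))            # manifold parameter
X = np.column_stack([np.cos(t), np.sin(t), 0.1 * rng.standard_normal(n)])

def kmeans(X, k, iters=50):
    """Plain k-means; stands in for the mixture's responsibility step."""
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):                # guard against empty clusters
                centers[j] = X[labels == j].mean(0)
    return labels, centers

k, d = 6, 1                                        # local analyzers, latent dim
labels, centers = kmeans(X, k)

# Local PCA per cluster: the top-d principal directions give each point
# a low-dimensional coordinate z in its own local frame.
bases = []
for j in range(k):
    _, _, Vt = np.linalg.svd(X[labels == j] - centers[j], full_matrices=False)
    bases.append(Vt[:d])

# "Coordination" stand-in: fit an affine map from each cluster's local
# coordinate to the manifold parameter t, so all local frames agree on
# one global coordinate. The paper learns such maps without knowing t.
g = np.empty(n)
for j in range(k):
    idx = labels == j
    z = (X[idx] - centers[j]) @ bases[j].T         # local PCA coordinates
    A = np.column_stack([z, np.ones(idx.sum())])
    coef, *_ = np.linalg.lstsq(A, t[idx], rcond=None)
    g[idx] = A @ coef                              # aligned global coordinate

# The aligned global coordinate should track the manifold parameter closely.
corr = abs(np.corrcoef(g, t)[0, 1])
```

Because each local PCA coordinate is nearly linear in `t` within its cluster, the per-cluster affine maps stitch the local frames into one consistent global coordinate, which is the intuition behind the coordination objective.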


Keywords: Density Model · Informatics Institute · Gaussian Density · Feature Extraction Technique · Probabilistic Principal Component Analyzer
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.




References

  1. T.F. Cox and M.A.A. Cox. Multidimensional Scaling. Number 59 in Monographs on Statistics and Applied Probability. Chapman & Hall, 1994.
  2. Z. Ghahramani and G.E. Hinton. The EM Algorithm for Mixtures of Factor Analyzers. Technical Report CRG-TR-96-1, University of Toronto, Canada, 1996.
  3. T. Kohonen. Self-Organizing Maps. Springer Series in Information Sciences. Springer-Verlag, Heidelberg, Germany, 2001.
  4. R.M. Neal and G.E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M.I. Jordan, editor, Learning in Graphical Models, pages 355–368. Kluwer Academic Publishers, Dordrecht, The Netherlands, 1998.
  5. S.T. Roweis, L.K. Saul, and G.E. Hinton. Global coordination of local linear models. In T.G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14. MIT Press, 2002.
  6. J.B. Tenenbaum, V. de Silva, and J.C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
  7. M.E. Tipping and C.M. Bishop. Mixtures of probabilistic principal component analysers. Neural Computation, 11(2):443–482, 1999.
  8. J.J. Verbeek, N. Vlassis, and B. Kröse. The Generative Self-Organizing Map: A Probabilistic Generalization of Kohonen’s SOM. Technical Report IAS-UVA-02-03, Informatics Institute, University of Amsterdam, The Netherlands, May 2002.
  9. J.J. Verbeek, N. Vlassis, and B. Kröse. Procrustes Analysis to Coordinate Mixtures of Probabilistic Principal Component Analyzers. Technical report, Informatics Institute, University of Amsterdam, The Netherlands, February 2002.
  10. N. Vlassis, Y. Motomura, and B. Kröse. Supervised dimension reduction of intrinsically low-dimensional data. Neural Computation, 14(1):191–215, January 2002.

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Jakob J. Verbeek (1)
  • Nikos Vlassis (1)
  • Ben Kröse (1)

  1. Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands