Mixtures of principal components Gaussians for density estimation in high dimension data spaces

  • Lemarié Bernard
Poster Papers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1451)


An approximation of the Gaussian model for density estimation in high-dimensional data spaces is presented. The work is mainly motivated by the need for a numerically tractable model in high-dimensional data spaces. The characteristic of the model is to restrict each local covariance matrix to its principal components, with the advantage of still being a probabilistic model. The likelihood of the local density is studied, and an iterative algorithm is then proposed to learn the model; the latter is an adaptation of the well-known iterative Generalized Hebbian Algorithm. A comparison is made with related work based on factor analysis. First experiments on handwritten digits are also reported.
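The learning procedure mentioned above adapts Sanger's Generalized Hebbian Algorithm (GHA) to extract the principal components of each local covariance matrix. As a rough illustration of the base algorithm only (not the paper's adapted mixture version), here is a minimal NumPy sketch; the learning rate and iteration counts are illustrative assumptions:

```python
import numpy as np

def gha_step(W, x, lr=0.01):
    """One Generalized Hebbian Algorithm update (Sanger, 1989).

    Rows of W converge toward the leading principal components of the
    (assumed zero-mean) input distribution.
    """
    y = W @ x  # project the input onto the current component estimates
    # Sanger's rule: Hebbian term minus a lower-triangular decorrelation
    # term, so each row learns a distinct principal direction.
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W
```

On zero-mean data the first row of `W` tends to the dominant eigenvector of the data covariance with unit norm; the paper's contribution is to embed this kind of iterative principal-component extraction inside the estimation of each mixture component.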


Density estimation · Approximation by mixtures · Handwritten recognition




  1. Dempster, A. P., Laird, N. M. and Rubin, D. B. (1977), "Maximum likelihood from incomplete data via the EM algorithm", J. Royal Stat. Soc. B 39(1), 1–38.
  2. Fukunaga, K. (1990), "Introduction to Statistical Pattern Recognition", second edition, Academic Press.
  3. Ghahramani, Z., Hinton, G. E. (1997), "The EM algorithm for mixtures of factor analyzers", University of Toronto Technical Report CRG-TR-96-1. ftp://ftp.cs.toronto.edu/pub/zoubin/tr-96-l.ps.gz
  4. Hinton, G. E., Dayan, P., Revow, M. (1997), "Modelling the manifolds of images of handwritten digits", IEEE Transactions on Neural Networks, vol. 8, no. 1, 65–74.
  5. Jordan, M. I., Xu, L. (1996), "Convergence results for the EM approach to mixtures of experts architectures", Neural Networks, vol. 8, no. 9, 1409–1431.
  6. Kambhatla, N., Leen, T. K. (1995), "Classifying with Gaussian mixtures and clusters", in Advances in Neural Information Processing Systems 7.
  7. Lemarié, B., Gilloux, M. and Leroux, M. (1996), "Handwritten word recognition using contextual hybrid RBF networks/Hidden Markov Models", in Advances in Neural Information Processing Systems 8.
  8. Lemarié, B. (1998), "Mélanges de gaussiennes en composantes principales pour l'estimation de densité", in Proceedings of RFIA, AFCET, France.
  9. Mao, J., Jain, A. K. (1995), "Artificial neural networks for feature extraction and multivariate data projection", IEEE Transactions on Neural Networks, vol. 6, no. 2.
  10. Neal, R. M., Hinton, G. E. (1993), "A new view of the EM algorithm that justifies incremental and other variants", University of Toronto, Dept. of Computer Science, preprint.
  11. Oja, E. (1982), "A simplified neuron model as a principal component analyzer", Journal of Mathematical Biology, 15, 267–273.
  12. Ormoneit, D., Tresp, V. (1996), "Improved Gaussian mixture density estimates using Bayesian penalty terms and network averaging", in Advances in Neural Information Processing Systems 8.
  13. Rao, C. R. (1955), "Estimation and tests of significance in factor analysis", Psychometrika, 20, 93–111.
  14. Sanger, T. D. (1989), "Optimal unsupervised learning in a single layer linear feedforward network", Neural Networks, 2, 459–473.
  15. Tipping, M. E., Bishop, C. M. (1997), "Mixtures of probabilistic principal component analysers", Aston University NCRG Technical Report. http://neural-server.aston.ac.uk/cgi-bin
  16. Xu, L., Jordan, M. I. (1996), "On convergence properties of the EM algorithm for Gaussian mixtures", Neural Computation, 8, 129–151.
  17. Xu, L. (1993), "Least Mean Square Error Reconstruction Principle for Self-Organizing Neural Nets", Neural Networks, 6, 627–648.

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Lemarié Bernard
    • 1
  1. La Poste SRTP, Nantes Cedex 02, France
