Computational Statistics, Volume 28, Issue 1, pp 127–138

Ensemble Gaussian mixture models for probability density estimation

  • Michael Glodek
  • Martin Schels
  • Friedhelm Schwenker
Original Paper

Abstract

Estimation of probability density functions (PDFs) is a fundamental task in statistics. This paper proposes an ensemble learning approach to density estimation using Gaussian mixture models (GMMs). Ensemble learning is closely related to model averaging: while standard model selection determines the single most suitable GMM, the ensemble approach combines a subset of GMMs in order to improve the precision and stability of the estimated probability density function. The ensemble GMM is investigated theoretically, and numerical experiments were conducted to demonstrate the benefits of the model. These evaluations show promising results for classification and for the approximation of non-Gaussian PDFs.
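The abstract does not specify the combination rule used by the authors; the sketch below only illustrates the general idea of an ensemble GMM density estimate, assuming a bagging-style scheme in which GMMs with varying component counts are fitted to bootstrap resamples and their densities are averaged uniformly. It uses scikit-learn's `GaussianMixture`; all data and variable names are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy non-Gaussian data: a two-component 1-D mixture
X = np.concatenate([rng.normal(-2.0, 0.5, 300),
                    rng.normal(1.0, 1.0, 700)]).reshape(-1, 1)

# Fit one GMM per bootstrap resample, varying the number of components
models = []
for k in (2, 3, 4):
    idx = rng.integers(0, len(X), size=len(X))  # bootstrap sample
    models.append(GaussianMixture(n_components=k, random_state=0).fit(X[idx]))

def ensemble_density(models, x):
    """Uniform average of member densities; a convex combination of PDFs is itself a PDF."""
    return np.mean([np.exp(m.score_samples(x)) for m in models], axis=0)

grid = np.linspace(-5.0, 5.0, 1001).reshape(-1, 1)
pdf = ensemble_density(models, grid)
dx = grid[1, 0] - grid[0, 0]
integral = float(pdf.sum() * dx)  # Riemann-sum check that the estimate integrates to ~1
```

Because each member is a valid density, the uniform average remains a valid density, which is one reason averaging can stabilize the estimate relative to a single model-selected GMM.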

Keywords

Ensemble learning, density estimation, finite mixture models

Notes

Acknowledgments

The presented work was developed within the Transregional Collaborative Research Centre SFB/TRR 62 “Companion-Technology for Cognitive Technical Systems” funded by the German Research Foundation (DFG) and DFG project SCHW 623/4-2. The work of Martin Schels is supported by a scholarship of the Carl-Zeiss Foundation.


Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Michael Glodek (1)
  • Martin Schels (1)
  • Friedhelm Schwenker (1)

  1. Institute of Neural Information Processing, University of Ulm, Ulm, Germany
