
Density Boosting for Gaussian Mixtures

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3316)

Abstract

Ensemble methods are among the most important recent developments in supervised learning, and their performance advantages have been demonstrated on problems from a wide variety of application domains. By contrast, efforts to apply ensemble methods to unsupervised learning have been relatively limited. This paper addresses the application of ensemble methods to unsupervised learning, specifically the task of density estimation. We extend the work of Rosset and Segal [3] and apply boosting, which has its roots as a gradient descent algorithm in function space, to the estimation of densities modeled by Gaussian mixtures. The algorithm is tested on both artificial and real-world datasets and is found to be superior to non-ensemble approaches; it also outperforms the alternative bagging algorithm.
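To make the procedure concrete, the following is a minimal sketch (not the authors' exact implementation) of boosted density estimation in the spirit of Rosset and Segal [3]: each round reweights the sample by the inverse of the current density estimate, which is the functional-gradient direction for the log-likelihood objective [4], fits a single weighted Gaussian as the weak learner, and mixes it into the model with a line-searched coefficient. All names and parameter settings here (fit_weighted_gaussian, boost_density, the coefficient grid) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_weighted_gaussian(X, w):
    """Weighted maximum-likelihood fit of a single Gaussian (the weak learner)."""
    w = w / w.sum()
    mu = w @ X                              # weighted mean
    d = X - mu
    # weighted covariance with a small ridge for numerical stability
    cov = (w[:, None] * d).T @ d + 1e-6 * np.eye(X.shape[1])
    return multivariate_normal(mean=mu, cov=cov)

def boost_density(X, n_rounds=10):
    """Boosted mixture: f_t = (1 - a) * f_{t-1} + a * g_t, with a chosen by line search."""
    n = X.shape[0]
    comps = [fit_weighted_gaussian(X, np.ones(n))]   # round 0: uniform weights
    mix = [1.0]
    f = comps[0].pdf(X)                              # current density at each sample point
    grid = np.linspace(0.01, 0.5, 50)                # candidate mixing coefficients
    for _ in range(n_rounds - 1):
        w = 1.0 / np.maximum(f, 1e-300)              # upweight poorly explained points
        g = fit_weighted_gaussian(X, w)
        gx = g.pdf(X)
        # line search: mixing coefficient that maximizes the sample log-likelihood
        a = max(grid, key=lambda c: np.log(np.maximum((1 - c) * f + c * gx, 1e-300)).sum())
        f = (1 - a) * f + a * gx
        mix = [m * (1 - a) for m in mix] + [a]       # mixture weights still sum to one
        comps.append(g)
    return comps, mix

# Toy usage: two well-separated clusters in two dimensions
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (200, 2)), rng.normal(2.0, 1.0, (200, 2))])
components, mixture_weights = boost_density(X, n_rounds=5)
```

Bagging, by comparison, would fit each mixture to an independent bootstrap resample and average the resulting densities uniformly, without the gradient-driven reweighting above.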


References

  1. Smyth, P., Wolpert, D.: An evaluation of linearly combining density estimators via stacking. Machine Learning 36(1–2), 53–89 (1999)

  2. Wolpert, D.: Stacked generalization. Neural Networks 5(2), 241–260 (1992)

  3. Rosset, S., Segal, E.: Boosting density estimation. Advances in Neural Information Processing Systems 15 (2002)

  4. Mason, L., Baxter, J., Bartlett, P., Frean, M.: Boosting algorithms as gradient descent in function space. Advances in Neural Information Processing Systems 12, 512–518 (1999)

  5. Ormoneit, D., Tresp, V.: Improved Gaussian mixture density estimates using Bayesian penalty terms and network averaging. Advances in Neural Information Processing Systems 8, 542–548 (1996)

  6. Jordan, M., Jacobs, R.: Hierarchical mixtures of experts and the EM algorithm. Neural Computation 6, 181–214 (1994)

  7. Dempster, A., Laird, N., Rubin, D.: Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B 39(1), 1–38 (1977)

  8. Silverman, B.W.: Density Estimation for Statistics and Data Analysis. Chapman and Hall, New York (1986)

  9. Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996)

  10. Breiman, L.: Prediction games and arcing algorithms. Technical Report 504, Department of Statistics, University of California, Berkeley (1998)

  11. Efron, B., Tibshirani, R.: An Introduction to the Bootstrap. Chapman & Hall, Boca Raton (1994)

  12. Friedman, J., Hastie, T., Tibshirani, R.: Additive logistic regression: a statistical view of boosting. The Annals of Statistics 28(2), 337–374 (2000)

  13. Freund, Y., Schapire, R.: Experiments with a new boosting algorithm. In: Proceedings of the Thirteenth International Conference on Machine Learning, Bari, Italy, July 3–6, pp. 148–156 (1996)

  14. Schapire, R., Freund, Y., Bartlett, P., Lee, W.: Boosting the margin: a new explanation for the effectiveness of voting methods. The Annals of Statistics 26(5), 1651–1686 (1998)

  15. Schapire, R.E.: The boosting approach to machine learning: an overview. In: MSRI Workshop on Nonlinear Estimation and Classification (2002)

  16. Freund, Y., Schapire, R.E.: A short introduction to boosting. Journal of Japanese Society for Artificial Intelligence 14(5), 771–780 (1999)

  17. Freund, Y., Schapire, R.: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1), 119–139 (1997)

  18. Schapire, R., Stone, P., McAllester, D., Littman, M., Csirik, J.: Modeling auction price uncertainty using boosting-based conditional density estimation. In: Proceedings of the Nineteenth International Conference on Machine Learning (2002)

  19. Meir, R.: Bias, variance and the combination of estimators: the case of linear least squares. In: Tesauro, G., Touretzky, D., Leen, T. (eds.) Advances in Neural Information Processing Systems, vol. 7 (1995)

  20. Zemel, R., Pitassi, T.: A gradient-based boosting algorithm for regression problems. In: Advances in Neural Information Processing Systems (2001)

  21. Moody, J., Utans, J.: Architecture selection strategies for neural networks: application to corporate bond rating prediction. In: Refenes, A.N. (ed.) Neural Networks in the Capital Markets. John Wiley & Sons, Chichester (1994)

  22. Schapire, R., Singer, Y.: Improved boosting algorithms using confidence-rated predictions. In: Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pp. 80–91 (1998)

Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Song, X., Yang, K., Pavel, M. (2004). Density Boosting for Gaussian Mixtures. In: Pal, N.R., Kasabov, N., Mudi, R.K., Pal, S., Parui, S.K. (eds) Neural Information Processing. ICONIP 2004. Lecture Notes in Computer Science, vol 3316. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30499-9_78

  • DOI: https://doi.org/10.1007/978-3-540-30499-9_78

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-23931-4

  • Online ISBN: 978-3-540-30499-9
