Density Boosting for Gaussian Mixtures

  • Xubo Song
  • Kun Yang
  • Misha Pavel
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3316)

Abstract

Ensemble methods are among the most important recent developments in supervised learning, and have demonstrated performance advantages on problems from a wide variety of applications. By contrast, efforts to apply ensemble methods in the unsupervised domain have been relatively limited. This paper addresses the application of ensemble methods to unsupervised learning, specifically to the task of density estimation. We extend the work of Rosset and Segal [3] and apply boosting, which can be viewed as a gradient descent algorithm in function space, to the estimation of densities modeled by Gaussian mixtures. The algorithm is tested on both artificial and real-world datasets and is found to be superior to non-ensemble approaches. The method is also shown to outperform the alternative bagging algorithm.
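The boosting scheme described in the abstract can be illustrated with a short sketch. The Python code below is a minimal illustration in the spirit of the stagewise mixture update f_t = (1 - alpha) f_{t-1} + alpha * g_t used in Rosset and Segal [3], with a single weighted Gaussian standing in as the weak learner; the function names, the 1/f re-weighting, and the grid line search over alpha are illustrative assumptions, not the authors' implementation (which fits a Gaussian mixture by weighted EM at each round).

```python
# Hedged sketch of density boosting for Gaussian components (illustrative only).
import numpy as np

def fit_weighted_gaussian(X, w):
    """Weighted maximum-likelihood fit of one Gaussian (the weak learner here)."""
    w = w / w.sum()
    mu = w @ X
    diff = X - mu
    cov = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(X.shape[1])  # small ridge for stability
    return mu, cov

def gaussian_pdf(X, mu, cov):
    """Multivariate normal density evaluated at each row of X."""
    d = X.shape[1]
    diff = X - mu
    inv = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff)) / norm

def boost_density(X, rounds=10, alphas=np.linspace(0.05, 0.95, 19)):
    """Greedy stagewise mixture: f_t = (1 - alpha) * f_{t-1} + alpha * g_t."""
    n = X.shape[0]
    f = np.full(n, 1e-300)      # current density evaluated at the training points
    components = []             # list of (mixture weight, mean, covariance)
    for t in range(rounds):
        # Up-weight points where the current density is low (functional-gradient heuristic).
        w = 1.0 / np.maximum(f, 1e-300) if t > 0 else np.ones(n)
        mu, cov = fit_weighted_gaussian(X, w)
        g = gaussian_pdf(X, mu, cov)
        if t == 0:
            f, alpha = g, 1.0
        else:
            # Grid line search on alpha to maximize training log-likelihood.
            lls = [np.log((1 - a) * f + a * g).sum() for a in alphas]
            alpha = alphas[int(np.argmax(lls))]
            f = (1 - alpha) * f + alpha * g
        # Rescale earlier component weights so the mixture still sums to one.
        components = [(c * (1 - alpha), m, S) for c, m, S in components]
        components.append((alpha, mu, cov))
    return components

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 0.5, (200, 2)), rng.normal(3, 1.0, (300, 2))])
    mix = boost_density(X, rounds=5)
    print([round(c, 3) for c, _, _ in mix])
```

In practice the step size alpha would typically be chosen on held-out data rather than the training log-likelihood, and the weak learner would itself be a mixture fit by weighted EM; the single-Gaussian learner and grid search above only keep the sketch short.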


References

  1. Smyth, P., Wolpert, D.: An evaluation of linearly combining density estimators via stacking. Machine Learning 36(1/2), 53–89 (1999)
  2. Wolpert, D.: Stacked Generalization. Neural Networks 5(2), 241–260 (1992)
  3. Rosset, S., Segal, E.: Boosting density estimation. Advances in Neural Information Processing Systems 15 (2002)
  4. Mason, L., Baxter, J., Bartlett, P., Frean, M.: Boosting algorithms as gradient descent in function space. Advances in Neural Information Processing Systems 12, 512–518 (1999)
  5. Ormoneit, D., Tresp, V.: Improved Gaussian mixture density estimates using Bayesian penalty terms and network averaging. Advances in Neural Information Processing Systems 8, 542–548 (1996)
  6. Jordan, M., Jacobs, R.: Hierarchical mixtures of experts and the EM algorithm. Neural Computation 6, 181–214 (1994)
  7. Dempster, A., Laird, N., Rubin, D.: Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B (1977)
  8. Silverman, B.W.: Density Estimation for Statistics and Data Analysis. Chapman and Hall, NY (1986)
  9. Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996)
  10. Breiman, L.: Prediction games and arcing algorithms. Technical Report 504, Department of Statistics, University of California, Berkeley (1998)
  11. Efron, B., Tibshirani, R.: An Introduction to the Bootstrap. Chapman & Hall, Boca Raton (1994)
  12. Friedman, J., Hastie, T., Tibshirani, R.: Additive logistic regression: a statistical view of boosting. The Annals of Statistics 28(2), 337–374 (2000)
  13. Freund, Y., Schapire, R.: Experiments with a new boosting algorithm. In: Proceedings of the Thirteenth International Conference on Machine Learning, Bari, Italy, July 3-6, pp. 148–156 (1996)
  14. Schapire, R., Freund, Y., Bartlett, P., Lee, W.: Boosting the margin: a new explanation for the effectiveness of voting methods. The Annals of Statistics 26(5), 1651–1686 (1998)
  15. Schapire, R.E.: The boosting approach to machine learning: an overview. MSRI Workshop on Nonlinear Estimation and Classification (2002)
  16. Freund, Y., Schapire, R.E.: A short introduction to boosting. Journal of Japanese Society for Artificial Intelligence 14(5), 771–780 (1999)
  17. Freund, Y., Schapire, R.: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1), 119–139 (1997)
  18. Schapire, R., Stone, P., McAllester, D., Littman, M., Csirik, J.: Modeling auction price uncertainty using boosting-based conditional density estimation. In: Machine Learning: Proceedings of the Nineteenth International Conference (2002)
  19. Meir, R.: Bias, variance and the combination of estimators; the case of linear least squares. In: Tesauro, G., Touretzky, D., Leen, T. (eds.) Advances in Neural Information Processing Systems, vol. 7 (1995)
  20. Zemel, R., Pitassi, T.: A gradient-based boosting algorithm for regression problems. Advances in Neural Information Processing Systems (2001)
  21. Moody, J., Utans, J.: Architecture Selection Strategies for Neural Networks: Application to Corporate Bond Rating Prediction. In: Refenes, A.N. (ed.) Neural Networks in the Capital Markets. John Wiley & Sons, Chichester (1994)
  22. Schapire, R., Singer, Y.: Improved boosting algorithms using confidence-rated predictions. In: Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pp. 80–91 (1998)

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Xubo Song (1)
  • Kun Yang (1)
  • Misha Pavel (2)
  1. Department of Computer Science and Engineering, OGI School of Science and Engineering, Oregon Health and Science University, Beaverton
  2. Department of Biomedical Engineering, OGI School of Science and Engineering, Oregon Health and Science University, Beaverton
