Ensemble Learning

  • Chapter

Part of the Perspectives in Neural Computing book series (PERSPECT.NEURAL)

Abstract

This chapter gives a tutorial introduction to ensemble learning, a recently developed Bayesian method. For many problems it is intractable to perform inference using the true posterior density over the unknown variables. Ensemble learning allows the true posterior to be approximated by a simpler distribution for which the required inferences are tractable. When we say we are making a model of a system, we are setting up a tool which can be used to make inferences, predictions and decisions. Each model can be seen as a hypothesis, or explanation, which makes assertions about the quantities that are directly observable and those that can only be inferred from their effect on observable quantities.
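
As a brief orientation (the notation q(θ) for the approximating ensemble, p(θ | X) for the true posterior and X for the observed data is ours, not quoted from the chapter): ensemble learning fits q(θ) to the true posterior by minimising a cost function of the form

    C(q) = ∫ q(θ) log [ q(θ) / p(X, θ) ] dθ = KL( q(θ) ‖ p(θ | X) ) − log p(X),

which is bounded below by −log p(X) and attains that bound exactly when q equals the true posterior, so minimising C drives q towards the closest tractable match (in Kullback–Leibler divergence) to p(θ | X).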

Keywords

  • Cost Function
  • Posterior Distribution
  • Posterior Density
  • Stochastic Approximation
  • Ensemble Learning

These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© 2000 Springer-Verlag London

About this chapter

Cite this chapter

Lappalainen, H., Miskin, J.W. (2000). Ensemble Learning. In: Girolami, M. (eds) Advances in Independent Component Analysis. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0443-8_5

  • DOI: https://doi.org/10.1007/978-1-4471-0443-8_5

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-85233-263-1

  • Online ISBN: 978-1-4471-0443-8

  • eBook Packages: Springer Book Archive