Abstract
This book develops the Bayesian approach to learning for neural networks by examining the meaning of the prior distributions that are the starting point for Bayesian learning, by showing how the computations required by the Bayesian approach can be performed using Markov chain Monte Carlo methods, and by evaluating the effectiveness of Bayesian methods on several real and synthetic data sets. This work has practical significance for modeling data with neural networks. From a broader perspective, it shows how the Bayesian approach can be successfully applied to complex models, and in particular, challenges the common notion that one must limit the complexity of the model used when the amount of training data is small. I begin here by introducing the Bayesian framework, discussing past work on applying it to neural networks, and reviewing the basic concepts of Markov chain Monte Carlo implementation.
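To make the abstract's reference to Markov chain Monte Carlo concrete: the posterior over a network's weights is not available in closed form, so it is approximated by sampling. The book itself develops the hybrid (Hamiltonian) Monte Carlo method; the sketch below instead uses plain random-walk Metropolis on a tiny one-hidden-layer network, with the data, network size H, noise level sigma, prior scale tau, step size, and chain length all chosen purely for illustration rather than taken from the book.

# A minimal sketch (not the book's hybrid Monte Carlo): random-walk
# Metropolis sampling from the posterior over the weights of a tiny
# one-hidden-layer network. All sizes, priors, and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (illustrative only).
x = np.linspace(-3, 3, 40)
y = np.sin(x) + 0.1 * rng.standard_normal(40)

H = 5  # number of hidden units (assumed, for illustration)

def unpack(w):
    """Split a flat parameter vector into the network's weight groups."""
    u, a = w[:H], w[H:2*H]      # input-to-hidden weights and hidden biases
    v, b = w[2*H:3*H], w[3*H]   # hidden-to-output weights and output bias
    return u, a, v, b

def net(w, x):
    """One-hidden-layer tanh network evaluated at inputs x."""
    u, a, v, b = unpack(w)
    return np.tanh(np.outer(x, u) + a) @ v + b

def log_post(w, sigma=0.1, tau=1.0):
    """Log posterior (up to a constant): Gaussian likelihood, Gaussian prior."""
    log_lik = -0.5 * np.sum((y - net(w, x)) ** 2) / sigma**2
    log_prior = -0.5 * np.sum(w**2) / tau**2
    return log_lik + log_prior

# Random-walk Metropolis over the flattened weight vector.
w = rng.standard_normal(3 * H + 1)
lp = log_post(w)
samples = []
for t in range(20000):
    w_prop = w + 0.05 * rng.standard_normal(w.size)
    lp_prop = log_post(w_prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        w, lp = w_prop, lp_prop
    if t >= 10000 and t % 10 == 0:            # discard burn-in, then thin
        samples.append(w.copy())

# The Bayesian prediction averages the network over the retained samples,
# rather than committing to a single fitted weight vector.
pred = np.mean([net(s, x) for s in samples], axis=0)

Averaging predictions over posterior samples in this way is what lets the Bayesian approach use complex models without the overfitting that motivates limiting model complexity, which is the point the abstract emphasizes.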
Copyright information
© 1996 Springer Science+Business Media New York
Cite this chapter
Neal, R.M. (1996). Introduction. In: Bayesian Learning for Neural Networks. Lecture Notes in Statistics, vol 118. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-0745-0_1
Publisher: Springer, New York, NY
Print ISBN: 978-0-387-94724-2
Online ISBN: 978-1-4612-0745-0