Abstract
The preceding three chapters have examined the meaning of Bayesian neural network models, shown how these models can be implemented by Markov chain Monte Carlo methods, and demonstrated that such an implementation can be applied in practice to problems of moderate size, with good results. In this concluding chapter, I will review what has been accomplished in these areas, and describe ongoing and potential future work to extend these results, both for neural networks and for other flexible Bayesian models.
Copyright information
© 1996 Springer Science+Business Media New York
Cite this chapter
Neal, R.M. (1996). Conclusions and Further Work. In: Bayesian Learning for Neural Networks. Lecture Notes in Statistics, vol 118. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-0745-0_5
Publisher Name: Springer, New York, NY
Print ISBN: 978-0-387-94724-2
Online ISBN: 978-1-4612-0745-0
eBook Packages: Springer Book Archive