Abstract
We investigate the setting in which Monte Carlo methods are used and draw a parallel to the formal setting of statistical inference. In particular, we find that Monte Carlo approximation gives rise to a bias-variance dilemma. We show that it is possible to construct a biased approximation scheme whose approximation error is lower than that of a related unbiased algorithm.
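The abstract's central claim, that a biased estimator can beat an unbiased one on overall error, can be illustrated with a toy sketch that is not taken from the paper. Here the unbiased Monte Carlo estimate of a mean is compared against a deliberately biased shrinkage variant (the shrinkage factor `c` and all parameter values are illustrative assumptions): shrinking toward zero adds bias but reduces variance, and when the true mean is small the mean-squared error goes down.

```python
import random

def empirical_mse(estimator, mu, sigma, n, trials, rng):
    """Estimate the MSE of `estimator` for the mean of N(mu, sigma^2) samples."""
    sq_err = 0.0
    for _ in range(trials):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        sq_err += (estimator(xs) - mu) ** 2
    return sq_err / trials

def sample_mean(xs):
    # Unbiased Monte Carlo estimate: expected error is pure variance, sigma^2/n.
    return sum(xs) / len(xs)

def shrunk_mean(xs, c=0.5):
    # Biased estimate: shrinking toward 0 trades bias (1-c)^2 * mu^2
    # for a variance reduction factor c^2. (c=0.5 is an arbitrary choice.)
    return c * sample_mean(xs)

rng = random.Random(0)
mu, sigma, n, trials = 0.2, 1.0, 10, 20000

mse_unbiased = empirical_mse(sample_mean, mu, sigma, n, trials, rng)
mse_biased = empirical_mse(shrunk_mean, mu, sigma, n, trials, rng)
# Theory: MSE(unbiased) = sigma^2/n = 0.1;
#         MSE(biased)   = c^2*sigma^2/n + (1-c)^2*mu^2 = 0.035.
```

With these (assumed) parameters the biased shrinkage estimator's empirical MSE comes out well below the unbiased one's, which is the dilemma the paper formalizes: bias is acceptable when it buys a larger reduction in variance.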
Copyright information
© 2001 Springer-Verlag Berlin Heidelberg
Cite this paper
Mark, Z., Baram, Y. (2001). The Bias-Variance Dilemma of the Monte Carlo Method. In: Dorffner, G., Bischof, H., Hornik, K. (eds) Artificial Neural Networks — ICANN 2001. ICANN 2001. Lecture Notes in Computer Science, vol 2130. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44668-0_20
Print ISBN: 978-3-540-42486-4
Online ISBN: 978-3-540-44668-2