Introduction to Monte Carlo Methods

  • D. J. C. MacKay
Part of the NATO ASI Series book series (ASID, volume 89)


This chapter describes a sequence of Monte Carlo methods: importance sampling, rejection sampling, the Metropolis method, and Gibbs sampling. For each method, we discuss whether the method is expected to be useful for high-dimensional problems such as arise in inference with graphical models. After the methods have been described, the terminology of Markov chain Monte Carlo methods is presented. The chapter concludes with a discussion of advanced methods, including methods for reducing random walk behaviour.
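To make the flavour of these methods concrete, here is a minimal sketch of a generic random-walk Metropolis sampler. This is an illustrative implementation, not code from the chapter: the function names, the Gaussian proposal, and the standard-normal target are all assumptions chosen for the example.

```python
import math
import random

def metropolis(log_p, x0, step=1.0, n=10000, seed=0):
    """Random-walk Metropolis: sample from a density known only up to
    a normalising constant, supplied as its log, log_p."""
    rng = random.Random(seed)
    x, lp = x0, log_p(x0)
    samples = []
    for _ in range(n):
        x_new = x + rng.gauss(0.0, step)   # symmetric Gaussian proposal
        lp_new = log_p(x_new)
        delta = lp_new - lp
        # Accept with probability min(1, p(x_new)/p(x)).
        if delta >= 0 or rng.random() < math.exp(delta):
            x, lp = x_new, lp_new
        samples.append(x)
    return samples

# Illustrative target: a standard normal, specified up to a constant.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0)
mean = sum(draws) / len(draws)
```

Note that only the ratio of densities is needed, so the normalising constant of the target never has to be computed; this is the property that makes Metropolis-style methods attractive for the inference problems discussed in the chapter.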

For details of Monte Carlo methods, including theorems, proofs, and a full list of references, the reader is directed to Neal (1993), Gilks, Richardson and Spiegelhalter (1996), and Tanner (1996).


Keywords: Markov chain Monte Carlo, Ising model, Gibbs sampling, importance sampling



References

  1. Adler, S. L.: 1981, Over-relaxation method for the Monte-Carlo evaluation of the partition function for multiquadratic actions, Physical Review D: Particles and Fields 23(12), 2901–2904.
  2. Cowles, M. K. and Carlin, B. P.: 1996, Markov-chain Monte-Carlo convergence diagnostics: a comparative review, Journal of the American Statistical Association 91(434), 883–904.
  3. Gilks, W. and Wild, P.: 1992, Adaptive rejection sampling for Gibbs sampling, Applied Statistics 41, 337–348.
  4. Gilks, W. R., Richardson, S. and Spiegelhalter, D. J.: 1996, Markov Chain Monte Carlo in Practice, Chapman and Hall.
  5. Green, P. J.: 1995, Reversible jump Markov chain Monte Carlo computation and Bayesian model determination, Biometrika 82, 711–732.
  6. Marinari, E. and Parisi, G.: 1992, Simulated tempering: a new Monte-Carlo scheme, Europhysics Letters 19(6), 451–458.
  7. Neal, R. M.: 1993, Probabilistic inference using Markov chain Monte Carlo methods, Technical Report CRG-TR-93-1, Dept. of Computer Science, University of Toronto.
  8. Neal, R. M.: 1995, Suppressing random walks in Markov chain Monte Carlo using ordered overrelaxation, Technical Report 9508, Dept. of Statistics, University of Toronto.
  9. Neal, R. M.: 1996, Bayesian Learning for Neural Networks, number 118 in Lecture Notes in Statistics, Springer, New York.
  10. Propp, J. G. and Wilson, D. B.: 1996, Exact sampling with coupled Markov chains and applications to statistical mechanics, Random Structures and Algorithms 9(1–2), 223–252.
  11. Tanner, M. A.: 1996, Tools for Statistical Inference: Methods for the Exploration of Posterior Distributions and Likelihood Functions, Springer Series in Statistics, 3rd edn, Springer Verlag.
  12. Thomas, A., Spiegelhalter, D. J. and Gilks, W. R.: 1992, BUGS: A program to perform Bayesian inference using Gibbs sampling, in J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith (eds), Bayesian Statistics 4, Clarendon Press, Oxford, pp. 837–842.
  13. Yeomans, J.: 1992, Statistical Mechanics of Phase Transitions, Clarendon Press, Oxford.

Copyright information

© Springer Science+Business Media Dordrecht 1998

Authors and Affiliations

  • D. J. C. MacKay
  1. Department of Physics, Cavendish Laboratory, Cambridge University, Cambridge, UK
