Statistics and Computing, Volume 8, Issue 4, pp 357–364

A guided walk Metropolis algorithm



The random walk Metropolis algorithm is a simple Markov chain Monte Carlo scheme which is frequently used in Bayesian statistical problems. We propose a guided walk Metropolis algorithm which suppresses some of the random walk behavior in the Markov chain. This alternative algorithm is no harder to implement than the random walk Metropolis algorithm, but empirical studies show that it performs better in terms of efficiency and convergence time.
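As a concrete illustration (not code from the paper itself), the guided walk idea described above can be sketched as follows: the state is augmented with a direction variable p ∈ {−1, +1}, each proposal moves a positive half-normal increment in the current direction, and the direction reverses only when a proposal is rejected. The function name, step-size parameter, and one-dimensional standard normal example target are all illustrative assumptions.

```python
import math
import random

def guided_walk_metropolis(log_target, x0, step=1.0, n_iter=10000):
    """Sketch of a guided-walk-style Metropolis sampler for a 1-D target.

    The state (x, p) carries a direction p in {-1, +1}.  Each proposal
    steps a positive half-normal increment in direction p; the direction
    flips only on rejection, which suppresses the back-and-forth
    behaviour of the plain random walk Metropolis algorithm.
    """
    x, p = x0, 1
    samples = []
    for _ in range(n_iter):
        eps = abs(random.gauss(0.0, step))      # positive increment
        x_prop = x + p * eps
        # Symmetric increment, so the acceptance ratio is a target ratio.
        log_alpha = log_target(x_prop) - log_target(x)
        if log_alpha >= 0 or random.random() < math.exp(log_alpha):
            x = x_prop                          # accept: keep direction
        else:
            p = -p                              # reject: reverse direction
        samples.append(x)
    return samples

# Illustrative target: standard normal, log density up to a constant.
samples = guided_walk_metropolis(lambda x: -0.5 * x * x, x0=0.0)
```

On rejection the chain stays at x but negates p, so successive accepted moves tend to travel in one direction across the support rather than doubling back at every step.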

Keywords: Bayesian computation; Markov chain Monte Carlo; Metropolis–Hastings algorithm





Copyright information

© Kluwer Academic Publishers 1998

Authors and Affiliations

    1. Department of Statistics, University of British Columbia, Vancouver, Canada
