Langevin-Type Models II: Self-Targeting Candidates for MCMC Algorithms*

  • O. Stramer
  • R. L. Tweedie

Abstract

The Metropolis-Hastings algorithm for estimating a distribution π is based on choosing a candidate Markov chain and then accepting or rejecting moves of the candidate to produce a chain known to have π as the invariant measure. The traditional methods use candidates essentially unconnected to π. We show that the class of candidate distributions developed in Part I (Stramer and Tweedie 1999), which “self-target” towards the high-density areas of π, produces Metropolis-Hastings algorithms with convergence rates that appear to be considerably better than those known for the traditional candidate choices, such as the random walk. We illustrate this behavior for examples with exponential and polynomial tails, and for a logistic regression model using a Gibbs sampling algorithm. The detailed results are given in one dimension, but we indicate how they may extend successfully to higher dimensions.
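The self-targeting construction described above can be illustrated by the Metropolis-adjusted Langevin algorithm, in which the candidate chain drifts along the gradient of log π towards high-density regions, and the Hastings acceptance step corrects for the discretization. The sketch below is not taken from the paper: the one-dimensional standard normal target, the step size h, and all function names are illustrative assumptions.

```python
import numpy as np

def mala_step(x, log_pi, grad_log_pi, h, rng):
    """One Metropolis-Hastings step with a Langevin ("self-targeting") candidate."""
    # Candidate drifts up the gradient of log pi, then adds Gaussian noise
    mean_x = x + 0.5 * h * grad_log_pi(x)
    y = mean_x + np.sqrt(h) * rng.standard_normal()
    mean_y = y + 0.5 * h * grad_log_pi(y)
    # The proposal is asymmetric, so the Hastings correction is needed:
    # log q(y | x) and log q(x | y), each up to the same additive constant
    log_q_xy = -(y - mean_x) ** 2 / (2 * h)
    log_q_yx = -(x - mean_y) ** 2 / (2 * h)
    log_alpha = log_pi(y) - log_pi(x) + log_q_yx - log_q_xy
    if np.log(rng.uniform()) < log_alpha:
        return y  # accept the candidate move
    return x      # reject: the chain stays put

# Illustrative target: standard normal, log pi(x) = -x^2/2 up to a constant
log_pi = lambda x: -0.5 * x ** 2
grad_log_pi = lambda x: -x

rng = np.random.default_rng(0)
x, samples = 0.0, []
for _ in range(20000):
    x = mala_step(x, log_pi, grad_log_pi, h=0.5, rng=rng)
    samples.append(x)
samples = np.array(samples)
```

Because the Langevin candidate is not symmetric in (x, y), the acceptance ratio must include the proposal densities in both directions; dropping that correction would leave a chain whose invariant measure is not π.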

Keywords: Hastings algorithms, Metropolis algorithms, Markov chain Monte Carlo, diffusions, Langevin models, discrete approximations, posterior distributions, irreducible Markov processes, geometric ergodicity, uniform ergodicity, Gibbs sampling


References

  1. J. E. Besag, “Comments on ‘Representations of knowledge in complex systems,’ by U. Grenander and M. I. Miller,” J. Roy. Statist. Soc. Ser. B vol. 56, 1994.
  2. J. E. Besag and P. J. Green, “Spatial statistics and Bayesian computation (with discussion),” J. Roy. Statist. Soc. Ser. B vol. 55 pp. 25–38, 1993.
  3. J. E. Besag, P. J. Green, D. Higdon, and K. L. Mengersen, “Bayesian computation and stochastic systems (with discussion),” Statistical Science vol. 10 pp. 3–66, 1995.
  4. J. D. Doll, P. J. Rossky, and H. L. Friedman, “Brownian dynamics as smart Monte Carlo simulation,” Journal of Chemical Physics vol. 69 pp. 4628–4633, 1978.
  5. A. Gelman, G. O. Roberts, and W. R. Gilks, “Efficient Metropolis jumping rules,” in Bayesian Statistics 5, ed. J. M. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith, Oxford University Press: New York, 1995.
  6. W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, Markov Chain Monte Carlo in Practice, Chapman and Hall: London, 1996.
  7. W. K. Hastings, “Monte Carlo sampling methods using Markov chains and their applications,” Biometrika vol. 57 pp. 97–109, 1970.
  8. P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, Springer-Verlag: Berlin, 1992.
  9. K. L. Mengersen and R. L. Tweedie, “Rates of convergence of the Hastings and Metropolis algorithms,” Annals of Statistics vol. 24 pp. 101–121, 1996.
  10. N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller, “Equation of state calculations by fast computing machines,” J. Chemical Physics vol. 21 pp. 1087–1091, 1953.
  11. S. P. Meyn and R. L. Tweedie, Markov Chains and Stochastic Stability, Springer-Verlag: London, 1993.
  12. G. O. Roberts and R. L. Tweedie, “Exponential convergence of Langevin diffusions and their discrete approximations,” Bernoulli vol. 2 pp. 341–364, 1996.
  13. G. O. Roberts and R. L. Tweedie, “Geometric convergence and central limit theorems for multi-dimensional Hastings and Metropolis algorithms,” Biometrika vol. 83 pp. 95–110, 1996.
  14. A. F. M. Smith and G. O. Roberts, “Bayesian computation via the Gibbs sampler and related Markov chain Monte Carlo methods (with discussion),” J. Roy. Statist. Soc. Ser. B vol. 55 pp. 3–24, 1993.
  15. O. Stramer and R. L. Tweedie, “Langevin-type models I: Diffusions with given stationary distributions and their discretizations,” Methodology and Computing in Applied Probability vol. 1 pp. 283–306, 1999.
  16. L. Tierney, “Markov chains for exploring posterior distributions (with discussion),” Ann. Statist. vol. 22 pp. 1701–1762, 1994.
  17. P. Tuominen and R. L. Tweedie, “Subgeometric rates of convergence of f-ergodic Markov chains,” Adv. Appl. Probab. vol. 26 pp. 775–798, 1994.

Copyright information

© Kluwer Academic Publishers 1999

Authors and Affiliations

  • O. Stramer (1)
  • R. L. Tweedie (2)

  1. Department of Statistics and Actuarial Science, University of Iowa, Iowa City, USA
  2. Division of Biostatistics, University of Minnesota, Minneapolis, USA
