Langevin-Type Models I: Diffusions with Given Stationary Distributions and Their Discretizations

  • O. Stramer
  • R. L. Tweedie

Abstract

We describe algorithms for estimating a given measure π known up to a constant of proportionality, based on a large class of diffusions (extending the Langevin model) for which π is invariant. We show that under weak conditions one can choose from this class in such a way that the diffusions converge to π at an exponential rate, and one can even ensure that convergence is independent of the starting point of the algorithm. When convergence is less than exponential, we show that it is often polynomial at verifiable rates. We then consider methods of discretizing the diffusion in time, and find methods which inherit the convergence rates of the continuous-time process. These contrast with the naive or Euler discretization, which can behave badly even in simple cases. Our results are given in detail in one dimension only, although extensions to higher dimensions are briefly described.
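As a point of reference only (this is not code from the paper), the sketch below shows the classical one-dimensional Langevin diffusion dX_t = (1/2)(log π)'(X_t) dt + dB_t, which has π as its stationary distribution, together with its naive Euler discretization. The standard normal target, the step size h, and all function names are assumptions chosen purely for illustration.

```python
# Minimal illustrative sketch (not from the paper): Euler discretization of the
# 1-D Langevin diffusion  dX_t = (1/2)(log pi)'(X_t) dt + dB_t,
# i.e. the recursion  X_{n+1} = X_n + (h/2)(log pi)'(X_n) + sqrt(h) Z_n,  Z_n ~ N(0,1).
# Target chosen for illustration: standard normal, pi(x) proportional to exp(-x^2 / 2).
import math
import random


def grad_log_pi(x):
    """d/dx log pi(x) for a standard normal target, i.e. -x."""
    return -x


def euler_langevin(n_steps, h, x0=0.0, seed=0):
    """Simulate the Euler (Euler-Maruyama) chain and return the visited points."""
    rng = random.Random(seed)
    x, path = x0, []
    for _ in range(n_steps):
        x = x + 0.5 * h * grad_log_pi(x) + math.sqrt(h) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path


if __name__ == "__main__":
    samples = euler_langevin(n_steps=50_000, h=0.05)
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    print(f"sample mean ~ {mean:.3f}, sample variance ~ {var:.3f}")  # roughly 0 and 1
```

For this light-tailed target the Euler chain behaves reasonably, but the point made in the abstract is precisely that such naive discretizations need not inherit the convergence behaviour of the continuous-time diffusion, which motivates the alternative discretizations studied in the paper.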

Keywords

Markov chain Monte Carlo, diffusions, Langevin models, posterior distributions, irreducible Markov processes, exponential ergodicity, uniform ergodicity, Euler schemes

Copyright information

© Kluwer Academic Publishers 1999

Authors and Affiliations

  • O. Stramer, Department of Statistics and Actuarial Science, University of Iowa, Iowa City, USA
  • R. L. Tweedie, Division of Biostatistics, University of Minnesota, Minneapolis, USA