Methodology and Computing in Applied Probability, Volume 1, Issue 3, pp 307–328

Langevin-Type Models II: Self-Targeting Candidates for MCMC Algorithms*

  • O. Stramer
  • R. L. Tweedie

DOI: 10.1023/A:1010090512027

Cite this article as:
Stramer, O. & Tweedie, R.L. Methodology and Computing in Applied Probability (1999) 1: 307. doi:10.1023/A:1010090512027

Abstract

The Metropolis-Hastings algorithm for estimating a distribution π is based on choosing a candidate Markov chain and then accepting or rejecting moves of the candidate to produce a chain known to have π as the invariant measure. The traditional methods use candidates essentially unconnected to π. We show that the class of candidate distributions, developed in Part I (Stramer and Tweedie 1999), which “self-target” towards the high density areas of π, produce Metropolis-Hastings algorithms with convergence rates that appear to be considerably better than those known for the traditional candidate choices, such as random walk. We illustrate this behavior for examples with exponential and polynomial tails, and for a logistic regression model using a Gibbs sampling algorithm. The detailed results are given in one dimension but we indicate how they may extend successfully to higher dimensions.
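The self-targeting candidates discussed in the abstract are typified by the Metropolis-adjusted Langevin algorithm (MALA), in which the proposal drifts toward high-density regions of π using its gradient. The following is a minimal one-dimensional sketch of that idea, not the authors' specific construction; the standard normal target, step size, and function names are illustrative assumptions.

```python
import math
import random

def log_pi(x):
    # Log-density of the target (standard normal here), up to a constant.
    return -0.5 * x * x

def grad_log_pi(x):
    # Gradient of log_pi; this is what "self-targets" the proposal.
    return -x

def mala_step(x, h, rng):
    """One Metropolis-adjusted Langevin step with step size h.

    The candidate is drawn from a Gaussian centered at
    x + (h/2) * grad log pi(x), then accepted or rejected so that
    pi remains the invariant measure.
    """
    mean_fwd = x + 0.5 * h * grad_log_pi(x)
    y = mean_fwd + math.sqrt(h) * rng.gauss(0.0, 1.0)
    mean_bwd = y + 0.5 * h * grad_log_pi(y)
    # Log proposal densities q(y|x) and q(x|y); normalizing constants cancel.
    log_q_fwd = -((y - mean_fwd) ** 2) / (2.0 * h)
    log_q_bwd = -((x - mean_bwd) ** 2) / (2.0 * h)
    log_alpha = log_pi(y) - log_pi(x) + log_q_bwd - log_q_fwd
    if math.log(rng.random()) < log_alpha:
        return y  # accept the candidate move
    return x      # reject: the chain stays put

rng = random.Random(0)
x, chain = 0.0, []
for _ in range(20000):
    x = mala_step(x, 0.5, rng)
    chain.append(x)
mean = sum(chain) / len(chain)
```

Unlike a random-walk candidate, which proposes moves blind to π, the drift term `0.5 * h * grad_log_pi(x)` pulls proposals toward the mode, which is the mechanism behind the improved convergence rates the paper studies.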

Keywords

Hastings algorithms, Metropolis algorithms, Markov chain Monte Carlo, diffusions, Langevin models, discrete approximations, posterior distributions, irreducible Markov processes, geometric ergodicity, uniform ergodicity, Gibbs sampling

Copyright information

© Kluwer Academic Publishers 1999

Authors and Affiliations

  • O. Stramer (1)
  • R. L. Tweedie (2)

  1. Department of Statistics and Actuarial Science, University of Iowa, Iowa City, USA
  2. Division of Biostatistics, University of Minnesota, Minneapolis, USA