Statistical Methods and Applications, Volume 12, Issue 1, pp 49–60

A new strategy for speeding Markov chain Monte Carlo algorithms

  • Antonietta Mira
  • Daniel J. Sargent

Abstract

Markov chain Monte Carlo (MCMC) methods have become popular as a basis for drawing inference from complex statistical models. Two common difficulties with MCMC algorithms are slow mixing and long run-times, which are frequently closely related. Mixing over the entire state space can often be aided by careful tuning of the chain's transition kernel. In order to preserve the algorithm's stationary distribution, however, care must be taken when updating a chain's transition kernel based on that same chain's history. In this paper we introduce a technique that allows the transition kernel of the Gibbs sampler to be updated at user-specified intervals, while preserving the chain's stationary distribution. This technique seems to be beneficial both in increasing the efficiency of the resulting estimates (via Rao-Blackwellization) and in reducing run-time. Reinterpreting the modified Gibbs sampling scheme in terms of auxiliary samples allows its extension to the more general Metropolis-Hastings framework. The strategies we develop are particularly helpful when calculation of the full conditional (for a Gibbs algorithm) or of the proposal distribution (for a Metropolis-Hastings algorithm) is computationally expensive.
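To make the Rao-Blackwellization idea mentioned in the abstract concrete, the following is a minimal, self-contained sketch (in Python) of a standard Gibbs sampler for a bivariate normal distribution, comparing the ordinary ergodic average with a Rao-Blackwellized estimate that averages full-conditional means. This is not the paper's adaptive algorithm; the target distribution, the function name `gibbs_bivariate_normal`, and all parameter values are illustrative assumptions.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=10_000, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Returns two estimates of E[X1] = 0: the naive ergodic average of the
    X1 draws, and a Rao-Blackwellized average of the full-conditional
    means E[X1 | X2 = x2^(t)] = rho * x2^(t).
    """
    rng = np.random.default_rng(seed)
    x1, x2 = 0.0, 0.0
    draws, cond_means = [], []
    sd = np.sqrt(1.0 - rho**2)          # conditional standard deviation
    for _ in range(n_iter):
        x1 = rng.normal(rho * x2, sd)   # draw x1 | x2 ~ N(rho*x2, 1-rho^2)
        x2 = rng.normal(rho * x1, sd)   # draw x2 | x1 ~ N(rho*x1, 1-rho^2)
        draws.append(x1)                # used by the naive estimator
        cond_means.append(rho * x2)     # E[X1 | x2], used by the RB estimator
    naive = np.mean(draws)              # ordinary Monte Carlo average
    rao_blackwell = np.mean(cond_means) # average of conditional expectations
    return naive, rao_blackwell

if __name__ == "__main__":
    print(gibbs_bivariate_normal(rho=0.9))
```

With `rho = 0.9` both estimators target E[X1] = 0; across repeated runs the Rao-Blackwellized average typically shows lower variance, which illustrates the kind of efficiency gain the abstract attributes to Rao-Blackwellization.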

Key words

Asymptotic variance, Efficiency, Gibbs sampler, Metropolis-Hastings algorithms, Rao-Blackwellization



Copyright information

© Springer-Verlag 2003

Authors and Affiliations

  • Antonietta Mira (1)
  • Daniel J. Sargent (2)
  1. Department of Economics, University of Insubria, Varese, Italy
  2. Mayo Clinic, Rochester, USA
