Statistics and Computing, Volume 8, Issue 2, pp 115–124

A simulation approach to convergence rates for Markov chain Monte Carlo algorithms



Markov chain Monte Carlo (MCMC) methods, including the Gibbs sampler and the Metropolis–Hastings algorithm, are very commonly used in Bayesian statistics for sampling from complicated, high-dimensional posterior distributions. A continuing source of uncertainty is how long such a sampler must be run in order to converge approximately to its target stationary distribution. A method has previously been developed to compute rigorous theoretical upper bounds on the number of iterations required to achieve a specified degree of convergence in total variation distance by verifying drift and minorization conditions. We propose the use of auxiliary simulations to estimate the numerical values needed in this theorem. Our simulation method makes it possible to compute quantitative convergence bounds for models for which the requisite analytical computations would be prohibitively difficult or impossible. On the other hand, although our method appears to perform well in our example problems, it cannot provide the guarantees offered by analytical proof.
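To make the idea concrete, here is a minimal sketch (not the authors' actual procedure) of how auxiliary simulations can estimate the numerical quantities appearing in a drift condition E[V(X₁) | X₀ = x] ≤ λV(x) + b. It assumes a toy random-walk Metropolis–Hastings chain targeting a standard normal distribution and the drift function V(x) = 1 + x²; all function names and parameter choices here are illustrative.

```python
import math
import random

def metropolis_step(x, proposal_sd=1.0):
    """One random-walk Metropolis step targeting a standard normal."""
    y = x + random.gauss(0.0, proposal_sd)
    # log pi(y) - log pi(x) for the standard normal target
    log_alpha = (x * x - y * y) / 2.0
    if random.random() < math.exp(min(0.0, log_alpha)):
        return y
    return x

def V(x):
    """A simple candidate drift function, V(x) = 1 + x^2."""
    return 1.0 + x * x

def estimate_drift(x0, n_sims=20000):
    """Estimate E[V(X_1) | X_0 = x0] by auxiliary simulation:
    start many independent copies of the chain at x0, take one
    step in each, and average V at the resulting states."""
    total = 0.0
    for _ in range(n_sims):
        total += V(metropolis_step(x0))
    return total / n_sims

if __name__ == "__main__":
    random.seed(0)
    # Comparing the estimates against lambda * V(x0) + b over a grid
    # of starting points suggests values of lambda and b for the
    # drift inequality E[V(X_1) | X_0 = x0] <= lambda * V(x0) + b.
    for x0 in [0.0, 1.0, 2.0, 4.0]:
        ev = estimate_drift(x0)
        print(f"x0={x0:4.1f}  V(x0)={V(x0):6.1f}  est. E[V(X1)|x0]={ev:6.2f}")
```

In this toy example the estimated conditional expectations fall below V(x₀) for starting points away from the centre, consistent with geometric drift toward the mode; for a real model the same estimates would feed into the theoretical bound in place of analytically derived constants.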

Keywords: Drift condition; Gibbs sampler; Markov chain Monte Carlo; Metropolis–Hastings algorithm; minorization condition; ordinal probit; variance components





Copyright information

© Kluwer Academic Publishers 1998

Authors and Affiliations

  1. Department of Biostatistics, Harvard School of Public Health, Boston, USA
  2. Department of Statistics, University of Toronto, Toronto, Canada
