
Statistics and Computing, Volume 26, Issue 4, pp 761–781

A computational procedure for estimation of the mixing time of the random-scan Metropolis algorithm

David A. Spade

Abstract

Many situations, especially in Bayesian statistical inference, call for the use of a Markov chain Monte Carlo (MCMC) method as a way to draw approximate samples from an intractable probability distribution. With the use of any MCMC algorithm comes the question of how long the algorithm must run before it can be used to draw an approximate sample from the target distribution. A common method of answering this question involves verifying that the Markov chain satisfies a drift condition and an associated minorization condition (Rosenthal, J Am Stat Assoc 90:558–566, 1995; Jones and Hobert, Stat Sci 16:312–334, 2001). This is often difficult to do analytically, so as an alternative, it is typical to rely on output-based methods of assessing convergence. The work presented here gives a computational method of approximately verifying a drift condition and a minorization condition specifically for the symmetric random-scan Metropolis algorithm. Two examples of the use of the method described in this article are provided, and output-based methods of convergence assessment are presented in each example for comparison with the upper bound on the convergence rate obtained via the simulation-based approach.
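For context, the drift-and-minorization approach referenced above (Rosenthal, J Am Stat Assoc 90:558–566, 1995; Jones and Hobert, Stat Sci 16:312–334, 2001) can be summarized as follows; the notation here is a generic sketch and need not match the article's. Suppose the chain with transition kernel P and stationary distribution \pi admits a function V \ge 1 satisfying the drift condition

    E[V(X_{k+1}) \mid X_k = x] \le \lambda V(x) + b, \qquad 0 < \lambda < 1, \; b < \infty,

together with the minorization condition

    P(x, \cdot) \ge \epsilon \, Q(\cdot) \quad \text{for all } x \in C = \{x : V(x) \le d\},

for some \epsilon > 0, some probability measure Q, and some d > 2b/(1-\lambda). Then, for any 0 < r < 1,

    \| P^k(x_0, \cdot) - \pi \|_{TV} \le (1-\epsilon)^{rk} + \left( \alpha^{-(1-r)} A^{r} \right)^{k} \left( 1 + \frac{b}{1-\lambda} + V(x_0) \right),

where \alpha^{-1} = (1 + 2b + \lambda d)/(1 + d) and A = 1 + 2(\lambda d + b). The procedure described in this article approximately verifies conditions of this form for the symmetric random-scan Metropolis algorithm by simulation, so that an explicit upper bound of this type on the mixing time can be computed when analytic verification is intractable.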

Keywords

Markov chain Monte Carlo · Mixing time · Drift and minorization · Bayesian inference · Computational statistics

Notes

Acknowledgments

The author would like to thank Radu Herbei and Laura Kubatko, as well as two anonymous reviewers, for their helpful insights and commentary during the completion of this project.

References

  1. Chen, R.: Convergence analysis and comparisons of Markov chain Monte Carlo algorithms in digital communications. IEEE Trans. Signal Process. 50, 255–270 (2002)
  2. Chib, S., Nardari, F., Shephard, N.: Markov chain Monte Carlo methods for generalized stochastic volatility models. J. Econ. 108, 281–316 (1998)
  3. Cowles, M.K., Rosenthal, J.S.: A simulation-based approach to convergence rates for Markov chain Monte Carlo algorithms. Stat. Comput. 8, 115–124 (1998)
  4. Deller, S.C., Amiel, L., Deller, M.: Model uncertainty in ecological criminology: an application of Bayesian model averaging with rural crime data. Int. J. Crim. Soc. Theory 4(2), 683–717 (2011)
  5. Eraker, B.: MCMC analysis of diffusion models with applications to finance. J. Bus. Econ. Stat. 19(2), 177–191 (2001)
  6. Fort, G., Moulines, E., Roberts, G.O., Rosenthal, J.S.: On the geometric ergodicity of hybrid samplers. J. Appl. Probab. 40, 123–146 (2003)
  7. Gelman, A., Gilks, W.R., Roberts, G.O.: Weak convergence and optimal scaling of random walk Metropolis algorithms. Ann. Appl. Probab. 7(1), 110–120 (1997)
  8. Gelman, A., Rubin, D.B.: Inference from iterative simulation using multiple sequences. Stat. Sci. 7, 457–511 (1992)
  9. Geweke, J.: Bayesian Statistics. Oxford University Press, Oxford (1992)
  10. Heidelberger, P., Welch, P.D.: Simulation run length control in the presence of an initial transient. Oper. Res. 31, 1109–1144 (1983)
  11. Hobert, J.P., Geyer, C.J.: Geometric ergodicity of Gibbs and block Gibbs samplers for a hierarchical random effects model. J. Multivar. Anal. 67, 414–430 (1998)
  12. Ishwaran, H., James, L.F., Sun, J.: Bayesian model selection in finite mixtures by marginal density decompositions. J. Am. Stat. Assoc. 96, 1316–1332 (2001)
  13. Jarner, S.F., Hansen, E.: Geometric ergodicity of Metropolis algorithms. Stoch. Process. Appl. 85, 341–361 (2000)
  14. Jones, G., Hobert, J.P.: Honest exploration of intractable probability distributions via Markov chain Monte Carlo. Stat. Sci. 16(4), 312–334 (2001)
  15. Jones, G., Hobert, J.P.: Sufficient burn-in for Gibbs samplers for a hierarchical random effects model. Ann. Stat. 32(2), 734–817 (2004)
  16. Li, S., Pearl, D.K., Doss, H.: Phylogenetic tree construction using Markov chain Monte Carlo. J. Am. Stat. Assoc. 95, 493–508 (2000)
  17. Liang, F.: Continuous contour Monte Carlo for marginal density estimation with an application to a spatial statistical model. J. Comput. Graph. Stat. 16(3), 608–632 (2007)
  18. Madras, N., Sezer, D.: Quantitative bounds for Markov chain convergence: Wasserstein and total variation distances. Bernoulli 16(3), 882–908 (2010)
  19. Neal, R.M.: Annealed importance sampling. Technical Report, Department of Statistics, University of Toronto (1998)
  20. Oh, M., Berger, J.O.: Adaptive importance sampling in Monte Carlo integration. Technical Report, Department of Statistics, Purdue University (1989)
  21. Robert, C.P., Casella, G.: Monte Carlo Statistical Methods, 2nd edn. Springer, New York (2004)
  22. Roberts, G.O., Rosenthal, J.S.: Two convergence properties of hybrid samplers. Ann. Appl. Probab. 8, 397–407 (1998)
  23. Roberts, G.O., Rosenthal, J.S.: Geometric ergodicity and hybrid Markov chains. Electron. Commun. Probab. 2(2), 13–25 (1997)
  24. Roberts, G.O., Tweedie, R.L.: Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms. Biometrika 83(1), 95–110 (1996)
  25. Roberts, G.O., Rosenthal, J.S.: On convergence rates of Gibbs samplers for uniform distributions. Ann. Appl. Probab. 8, 1291–1302 (1998)
  26. Rosenthal, J.S.: Minorization conditions and convergence rates for Markov chain Monte Carlo. J. Am. Stat. Assoc. 90, 558–566 (1995)
  27. Sonksen, M.D., Wang, X., Umland, K.: Bayesian partially ordered multinomial probit and logit models with an application to course redesign (2013)
  28. Tierney, L.: Markov chains for exploring posterior distributions (with discussion). Ann. Stat. 22, 1701–1762 (1994)

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  1. University of Missouri–Kansas City, Kansas City, USA
