Convergence Rates of Attractive-Repulsive MCMC Algorithms

Abstract

We consider MCMC algorithms for certain particle systems that include both attractive and repulsive forces, making their convergence analysis challenging. We prove that a version of these algorithms on a bounded state space is uniformly ergodic with an explicit quantitative convergence rate. We also prove that a version on an unbounded state space is still geometrically ergodic, and then use the method of shift-coupling to obtain an explicit quantitative bound on its convergence rate.
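To make the setting concrete, the sketch below shows a minimal random-walk Metropolis sampler for a toy particle system on a bounded state space whose target density combines a repulsive pairwise term with an attractive confining term. This is an illustrative sketch only, not the algorithm analyzed in the paper; the potential, parameter values, and update rule are hypothetical choices.

```python
# Minimal illustrative sketch (NOT the paper's algorithm): Metropolis-within-Gibbs
# updates for a toy particle system on the bounded state space [0, 1]^n.
# The target combines a repulsive pairwise term (particles push apart) with an
# attractive term pulling every particle toward the centre. All names and
# parameters below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def log_density(x, rep=0.05, att=2.0):
    """Unnormalised log target: pairwise repulsion plus attraction toward 0.5."""
    diffs = x[:, None] - x[None, :]
    # Repulsive part: higher density when particles are far apart.
    repulsion = rep * np.sum(np.log(np.abs(diffs) + 1e-6)[np.triu_indices(len(x), 1)])
    # Attractive part: quadratic pull toward the centre of [0, 1].
    attraction = -att * np.sum((x - 0.5) ** 2)
    return repulsion + attraction

def metropolis_step(x, step=0.1):
    """One symmetric random-walk Metropolis update of a randomly chosen particle."""
    i = rng.integers(len(x))
    prop = x.copy()
    prop[i] += step * rng.normal()
    if not (0.0 <= prop[i] <= 1.0):   # reject moves that leave the bounded state space
        return x
    if np.log(rng.uniform()) < log_density(prop) - log_density(x):
        return prop
    return x

x = rng.uniform(size=5)               # five particles, arbitrary starting configuration
for _ in range(10_000):
    x = metropolis_step(x)
print(np.sort(x))                     # a rough draw from the toy target
```

The bounded state space and the single-particle symmetric proposal mirror, in spirit, the kind of chain for which uniform ergodicity and explicit quantitative rates are established in the paper; the unbounded-state-space results concern a different, unconfined version.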


Change history

  • 04 November 2021

    The sixth author's name and the abstract text were updated.


Acknowledgements

We thank the editor and referee for very helpful suggestions which have led to many improvements of this paper.

Author information


Corresponding author

Correspondence to Jeffrey S. Rosenthal.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Jiang, Y.H., Liu, T., Lou, Z. et al. Convergence Rates of Attractive-Repulsive MCMC Algorithms. Methodol Comput Appl Probab (2021). https://doi.org/10.1007/s11009-021-09909-y


Keywords

  • Markov chain Monte Carlo
  • Convergence rate
  • Particle system
  • Shift coupling