Nonasymptotic Bounds on the Mean Square Error for MCMC Estimates via Renewal Techniques

  • Krzysztof Łatuszyński
  • Błażej Miasojedow
  • Wojciech Niemiro
Conference paper
Part of the Springer Proceedings in Mathematics & Statistics book series (PROMS, volume 23)

Abstract

Nummelin’s split chain construction allows one to decompose a Markov chain Monte Carlo (MCMC) trajectory into i.i.d. “excursions”. Regenerative MCMC algorithms based on this technique use a random number of samples. They have been proposed as a promising alternative to the usual fixed-length simulation (Hobert et al., Biometrika 89:731–743, 2002; Mykland et al., J. Am. Statist. Assoc. 90:233–241, 1995; Rosenthal, J. Amer. Statist. Assoc. 90:558–566, 1995). In this note we derive nonasymptotic bounds on the mean square error (MSE) of regenerative MCMC estimates via techniques of renewal theory and sequential statistics. These results are applied to construct confidence intervals. We then focus on two cases of particular interest: chains satisfying the Doeblin condition and chains satisfying a geometric drift condition. Available explicit nonasymptotic results are compared for different schemes of MCMC simulation.
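The sketch below is a minimal, purely illustrative example (not the paper's algorithm) of the regenerative estimator built from i.i.d. excursions between regeneration times. It assumes a toy target π = N(0,1) and an artificial kernel P(x, ·) = βπ(·) + (1−β)K(x, ·), where K is a random-walk Metropolis kernel targeting π, so that drawing from π with probability β at each step gives an explicit Nummelin-type regeneration; all names, constants, and the choice of kernel are hypothetical.

```python
# Hypothetical illustration of a regenerative (tour-based) MCMC estimator.
# Assumptions: target pi = N(0,1), so E_pi[f(X)] = 1 for f(x) = x**2;
# kernel P(x, .) = BETA*pi(.) + (1-BETA)*K(x, .), K = random-walk Metropolis.
import numpy as np

rng = np.random.default_rng(0)
BETA = 0.1    # per-step regeneration probability (small-set constant)
STEP = 1.0    # RWM proposal scale

def log_pi(x):
    return -0.5 * x * x          # unnormalised log-density of N(0,1)

def rwm_step(x):
    """One random-walk Metropolis step targeting pi."""
    y = x + STEP * rng.normal()
    if np.log(rng.uniform()) < log_pi(y) - log_pi(x):
        return y
    return x

def simulate_tour(f):
    """Simulate one excursion between consecutive regeneration times.
    Returns (sum of f over the tour, tour length)."""
    x = rng.normal()               # regeneration: fresh draw from nu = pi
    s, length = 0.0, 0
    while True:
        s += f(x)
        length += 1
        if rng.uniform() < BETA:   # next state regenerates: tour ends here
            return s, length
        x = rwm_step(x)            # otherwise apply the residual kernel K

def regenerative_estimate(f, n_tours):
    """Ratio estimator: sum of tour sums / sum of tour lengths."""
    sums, lengths = zip(*(simulate_tour(f) for _ in range(n_tours)))
    return sum(sums) / sum(lengths)

print(regenerative_estimate(lambda x: x * x, n_tours=2000))  # approx. 1.0
```

Because the tours are i.i.d., the total simulation length is random (roughly n_tours/β steps here), which is exactly the feature of regenerative schemes contrasted with fixed-length simulation in the abstract.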

References

  1. Y.F. Atchade, F. Perron (2007): On the geometric ergodicity of Metropolis-Hastings algorithms. Statistics 41, 77–84.
  2. K.B. Athreya and P. Ney (1978): A new approach to the limit theory of recurrent Markov chains. Trans. Amer. Math. Soc. 245, 493–501.
  3. P.H. Baxendale (2005): Renewal Theory and Computable Convergence Rates for Geometrically Ergodic Markov Chains. Ann. Appl. Prob. 15, 700–738.
  4. W. Bednorz, R. Latała and K. Łatuszyński (2008): A Regeneration Proof of the Central Limit Theorem for Uniformly Ergodic Markov Chains. Elect. Comm. in Probab. 13, 85–98.
  5. L.A. Breyer and G.O. Roberts (2001): Catalytic perfect simulation. Methodol. Comput. Appl. Probab. 3, 161–177.
  6. Y.S. Chow and H. Teicher (1988): Probability Theory, Independence, Interchangeability, Martingales. Second Edition, Springer-Verlag.
  7. G. Fort and E. Moulines (2000): V-subgeometric ergodicity for a Hastings–Metropolis algorithm. Statist. Probab. Lett. 49, 401–410.
  8. G. Fort, E. Moulines, G.O. Roberts, and J.S. Rosenthal (2003): On the geometric ergodicity of hybrid samplers. J. Appl. Probab. 40 (1), 123–146.
  9. W.R. Gilks, S. Richardson, D.J. Spiegelhalter (1998): Markov Chain Monte Carlo in Practice. Chapman & Hall.
  10. P.W. Glynn and D. Ormoneit (2002): Hoeffding’s inequality for uniformly ergodic Markov chains. Statist. Probab. Lett. 56, 143–146.
  11. O. Häggström and J.S. Rosenthal (2007): On variance conditions for Markov chain CLTs. Elect. Comm. in Probab. 12, 454–464.
  12. J.P. Hobert and C.J. Geyer (1998): Geometric ergodicity of Gibbs and block Gibbs samplers for a hierarchical random effects model. J. Multivariate Anal. 67, 414–439.
  13. J.P. Hobert, G.L. Jones, B. Presnell, and J.S. Rosenthal (2002): On the Applicability of Regenerative Simulation in Markov Chain Monte Carlo. Biometrika 89, 731–743.
  14. J.P. Hobert and C.P. Robert (2004): A mixture representation of π with applications in Markov chain Monte Carlo and perfect sampling. Ann. Appl. Probab. 14, 1295–1305.
  15. M.R. Jerrum, L.G. Valiant, V.V. Vazirani (1986): Random generation of combinatorial structures from a uniform distribution. Theoretical Computer Science 43, 169–188.
  16. G.L. Jones, J.P. Hobert (2004): Sufficient burn-in for Gibbs samplers for a hierarchical random effects model. Ann. Statist. 32, 784–817.
  17. A.A. Johnson and G.L. Jones (2010): Gibbs sampling for a Bayesian hierarchical general linear model. Electronic J. Statist. 4, 313–333.
  18. I. Kontoyiannis, L. Lastras-Montano, S.P. Meyn (2005): Relative Entropy and Exponential Deviation Bounds for General Markov Chains. 2005 IEEE International Symposium on Information Theory.
  19. K. Łatuszyński, B. Miasojedow and W. Niemiro (2009): Nonasymptotic bounds on the estimation error for regenerative MCMC algorithms. arXiv:0907.4915v1.
  20. K. Łatuszyński, W. Niemiro (2011): Rigorous confidence bounds for MCMC under a geometric drift condition. J. of Complexity 27, 23–38.
  21. G. Lorden (1970): On excess over the boundary. Ann. Math. Statist. 41, 520–527.
  22. K.L. Mengersen and R.L. Tweedie (1996): Rates of convergence of the Hastings and Metropolis algorithms. Ann. Statist. 24 (1), 101–121.
  23. S.P. Meyn and R.L. Tweedie (1993): Markov Chains and Stochastic Stability. Springer-Verlag.
  24. P. Mykland, L. Tierney and B. Yu (1995): Regeneration in Markov chain samplers. J. Am. Statist. Assoc. 90, 233–241.
  25. R. Neath, G.L. Jones (2009): Variable-at-a-time implementation of Markov chain Monte Carlo. Preprint, arXiv:0903.0664v1.
  26. W. Niemiro, P. Pokarowski (2009): Fixed precision MCMC Estimation by Median of Products of Averages. J. Appl. Probab. 46 (2), 309–329.
  27. E. Nummelin (1978): A splitting technique for Harris recurrent Markov chains. Z. Wahr. Verw. Geb. 43, 309–318.
  28. E. Nummelin (2002): MC’s for MCMC’ists. International Statistical Review 70, 215–240.
  29. C.P. Robert and G. Casella (2004): Monte Carlo Statistical Methods. Springer-Verlag, New York.
  30. G.O. Roberts and J.S. Rosenthal (1997): Geometric ergodicity and hybrid Markov chains. Elec. Comm. Prob. 2 (2).
  31. G.O. Roberts and J.S. Rosenthal (2004): General state space Markov chains and MCMC algorithms. Probability Surveys 1, 20–71.
  32. J.S. Rosenthal (1995): Minorization conditions and convergence rates for Markov chains. J. Amer. Statist. Assoc. 90, 558–566.
  33. D. Rudolf (2008): Explicit error bounds for lazy reversible Markov chain Monte Carlo. J. of Complexity 25, 11–24.
  34. V. Roy, J.P. Hobert (2010): On Monte Carlo methods for Bayesian multivariate regression models with heavy-tailed errors. J. Multivariate Anal. 101, 1190–1202.
  35. D.B. Wilson (2000): How to couple from the past using a read-once source of randomness. Random Structures Algorithms 16 (1), 85–113.

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Krzysztof Łatuszyński, Department of Statistics, University of Warwick, Coventry, UK
  • Błażej Miasojedow, Institute of Applied Mathematics and Mechanics, University of Warsaw, Warszawa, Poland
  • Wojciech Niemiro, Faculty of Mathematics and Computer Science, Nicolaus Copernicus University, Toruń, Poland