Local consistency of Markov chain Monte Carlo methods

Abstract

In this paper, we introduce a notion of efficiency (consistency) for Markov chain Monte Carlo (MCMC) methods and examine some of their asymptotic properties. We apply these results to the data augmentation (DA) procedure for independent and identically distributed observations. More precisely, we show that if both the sample size and the running time of the DA procedure tend to infinity, the empirical distribution generated by the DA procedure tends to the posterior distribution. This is a local property of the DA procedure, which may, in some cases, describe its behavior more usefully than global properties. The advantages of the local approach are the simplicity and generality of the results, and the local properties provide useful insight into how to construct efficient algorithms.
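The convergence claim above can be illustrated numerically. The following is a minimal sketch, not the paper's construction: a hypothetical DA (two-step Gibbs) sampler for the toy model y_i | z_i ~ N(z_i, 1), z_i | θ ~ N(θ, 1) with a flat prior on θ, where the marginal posterior is known in closed form, θ | y ~ N(ȳ, 2/n). As both n and the number of iterations grow, the empirical distribution of the θ-draws should approach this posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (hypothetical, for illustration only):
#   y_i | z_i ~ N(z_i, 1),  z_i | theta ~ N(theta, 1),  flat prior on theta.
# Marginalizing z gives y_i ~ N(theta, 2), so theta | y ~ N(ybar, 2/n).
n = 200
theta_true = 1.5
y = rng.normal(theta_true, np.sqrt(2.0), size=n)

iters = 5000
theta = 0.0
draws = np.empty(iters)
for t in range(iters):
    # I-step: impute latent z_i | y_i, theta ~ N((y_i + theta)/2, 1/2)
    z = rng.normal((y + theta) / 2.0, np.sqrt(0.5))
    # P-step: draw theta | z ~ N(zbar, 1/n)
    theta = rng.normal(z.mean(), np.sqrt(1.0 / n))
    draws[t] = theta

# After burn-in, the empirical distribution of the draws should be
# close to the posterior N(ybar, 2/n).
post = draws[1000:]
print(post.mean(), y.mean(), post.var(), 2.0 / n)
```

The sampler is an AR(1)-type chain in θ with geometric mixing, so the empirical mean and variance of the draws settle near ȳ and 2/n after a short burn-in; this is the kind of behavior the local analysis in the paper quantifies as n and the running time tend to infinity jointly.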

Keywords

Monte Carlo · Markov chain · Asymptotic normality


Copyright information

© The Institute of Statistical Mathematics, Tokyo 2013

Authors and Affiliations

Graduate School of Engineering Science, Osaka University, Toyonaka, Japan
