Abstract
We develop an efficient computational algorithm that produces efficient Markov chain Monte Carlo (MCMC) transition matrices. The first level of efficiency is measured in terms of the number of operations needed to produce the resulting matrix. The second level of efficiency is evaluated in terms of the asymptotic variance of the resulting MCMC estimators. Results are first given for transition matrices in finite state spaces and then extended to transition kernels in more general settings.
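The efficiency ordering the abstract refers to can be illustrated numerically. The sketch below is not the paper's algorithm; it is a minimal, hypothetical 3-state example (the target `pi`, the uniform proposal, and the transfer size `delta` are all illustrative assumptions) showing a Peskun-style probability mass transfer that moves mass off the diagonal of a Metropolis transition matrix while preserving stationarity, after which the asymptotic variance of an MCMC estimator does not increase.

```python
import numpy as np

# Hypothetical target distribution on a 3-state space.
pi = np.array([0.5, 0.3, 0.2])

def metropolis_matrix(pi, Q):
    """Metropolis transition matrix for a symmetric proposal matrix Q."""
    n = len(pi)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                P[i, j] = Q[i, j] * min(1.0, pi[j] / pi[i])
        P[i, i] = 1.0 - P[i].sum()  # rejection mass stays on the diagonal
    return P

def asympt_var(pi, P, f):
    """Asymptotic variance of the MCMC estimator of E_pi[f], computed from
    the fundamental matrix Z = (I - P + 1 pi^T)^{-1} of the finite chain."""
    n = len(pi)
    fbar = f - pi @ f  # centre f under pi
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    return pi @ (fbar * ((2.0 * Z - np.eye(n)) @ fbar))

Q = np.full((3, 3), 1.0 / 3.0)  # uniform symmetric proposal
P = metropolis_matrix(pi, Q)

# Stationarity-preserving mass transfer for the pair (i, j): move delta
# from P[i,i] to P[i,j], and the detailed-balance-matched amount
# delta * pi[i]/pi[j] from P[j,j] to P[j,i], so pi stays invariant.
i, j, delta = 0, 1, 0.05
P2 = P.copy()
P2[i, i] -= delta
P2[i, j] += delta
P2[j, j] -= delta * pi[i] / pi[j]
P2[j, i] += delta * pi[i] / pi[j]

f = np.array([1.0, 0.0, 0.0])  # estimating pi[0]
assert np.allclose(pi @ P2, pi)  # stationarity is preserved
v1, v2 = asympt_var(pi, P, f), asympt_var(pi, P2, f)
print(v1, v2)  # by Peskun's ordering, v2 <= v1
```

The key check is the detailed-balance calculation in the comment: `pi[i] * delta` added to the flow i→j is exactly matched by `pi[j] * (delta * pi[i]/pi[j])` added to the flow j→i, so reversibility, and hence stationarity, survives the transfer.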
Acknowledgment
We thank Pieter Omtzigt for the insight given in the construction of the first-degree optimal matrix that appears in Sect. 4 and for discussing earlier versions of the paper.
Financial support from F.A.R. 2006, University of Insubria, and from the grant MIUR 2005 “Modelli marginali per variabili categoriche con applicazioni all’analisi causale” is gratefully acknowledged.
Cite this article
Mira, A. Stationarity preserving and efficiency increasing probability mass transfers made possible. Computational Statistics 21, 509–522 (2006). https://doi.org/10.1007/s00180-006-0009-9