Abstract
Sampling from a truncated distribution is difficult. Two major methods have been proposed for this task. The first is a random-walk MCMC algorithm: it eventually yields the correct distribution, but it can be very slow on multi-modal distributions. The second, called the ellipsoid method, is more efficient in practice when users have good prior information, but its correctness is not guaranteed. In this paper, we present a framework that unifies the two approaches. The key idea is to merge both methods into a single Markov chain using a technique called Metropolis-coupled MCMC; once merged, the chains can validly exchange information with each other. Although the chain constructed from the ellipsoid approach cannot be proven correct, it usually converges rapidly to a useful stationary distribution, and its information helps the chain constructed from the random-walk approach converge faster to the correct distribution.
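The coupling step can be illustrated with a generic Metropolis-coupled MCMC sketch. This is a minimal toy, not the paper's construction: the companion chain here is a tempered (flattened) chain rather than one built from the ellipsoid approach, and the bimodal truncated target, the temperature `beta=0.3`, and all step sizes are invented for illustration.

```python
import math
import random


def target(x):
    # Illustrative bimodal density, truncated to [-4, 4]; zero outside.
    if abs(x) > 4.0:
        return 0.0
    return math.exp(-(x - 2.0) ** 2) + math.exp(-(x + 2.0) ** 2)


def tempered(x, beta=0.3):
    # Flattened companion density: crosses between modes more easily.
    return target(x) ** beta


def mh_step(x, dens, step=0.5):
    # One random-walk Metropolis step; dens(x) > 0 for states in the support,
    # and proposals outside the support (dens == 0) are always rejected.
    y = x + random.gauss(0.0, step)
    return y if random.random() < dens(y) / dens(x) else x


def mc3(n_iter=20000, swap_every=5):
    xa, xb = 2.0, 2.0  # chain A targets the correct density, chain B the flat one
    samples = []
    for t in range(n_iter):
        xa = mh_step(xa, target)
        xb = mh_step(xb, tempered)
        if t % swap_every == 0:
            # Metropolis-coupled swap: propose exchanging the two states and
            # accept with the usual Metropolis-Hastings ratio, which leaves
            # the joint stationary distribution of (A, B) invariant.
            num = target(xb) * tempered(xa)
            den = target(xa) * tempered(xb)
            if random.random() < num / den:
                xa, xb = xb, xa
        samples.append(xa)
    return samples
```

Because the swap is itself accepted or rejected with a Metropolis-Hastings ratio, chain A still targets the correct truncated distribution; the flat chain merely supplies mode-hopping proposals, which is the sense in which the two samplers "validly exchange information".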
Chatpatanasiri, R. How to sample from a truncated distribution if you must. Artif Intell Rev 31, 1 (2009). https://doi.org/10.1007/s10462-009-9121-x