
Adaptive independence samplers

Published in: Statistics and Computing

Abstract

Markov chain Monte Carlo (MCMC) is an important computational technique for generating samples from non-standard probability distributions. A major challenge in the design of practical MCMC samplers is to achieve efficient convergence and mixing properties. One way to accelerate convergence and mixing is to adapt the proposal distribution in light of previously sampled points, thus increasing the probability of acceptance. In this paper, we propose two new adaptive MCMC algorithms based on the Independent Metropolis–Hastings algorithm. In the first, we adjust the proposal to minimize an estimate of the cross-entropy between the target and proposal distributions, using the experience of pre-runs. This approach provides a general technique for deriving natural adaptive formulae. The second approach uses multiple parallel chains, and involves updating chains individually, then updating a proposal density by fitting a Bayesian model to the population. An important feature of this approach is that adapting the proposal does not change the limiting distributions of the chains. Consequently, the adaptive phase of the sampler can be continued indefinitely. We include results of numerical experiments indicating that the new algorithms compete well with traditional Metropolis–Hastings algorithms. We also demonstrate the method for a realistic problem arising in comparative genomics.
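The first approach described in the abstract can be illustrated with a minimal sketch: an independence Metropolis–Hastings sampler whose proposal is refitted from pre-run output. This is not the authors' exact scheme; it assumes a one-dimensional Gaussian proposal family (for which minimizing the estimated cross-entropy to the target reduces to matching the sample mean and standard deviation of the pre-run draws) and a hypothetical Gaussian-mixture target chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Hypothetical target: an equal-weight mixture of N(-2, 1) and N(2, 1),
    # known only up to a normalizing constant (as is typical for MCMC).
    return np.logaddexp(-0.5 * (x + 2.0) ** 2, -0.5 * (x - 2.0) ** 2)

def independence_mh(log_pi, mu, sigma, x0, n):
    """Independent Metropolis-Hastings with a fixed N(mu, sigma^2) proposal.

    The proposal does not depend on the current state, so the acceptance
    ratio is pi(y) q(x) / (pi(x) q(y)).
    """
    def log_q(z):  # proposal log-density, up to a constant
        return -0.5 * ((z - mu) / sigma) ** 2

    x = x0
    out = np.empty(n)
    for i in range(n):
        y = rng.normal(mu, sigma)
        log_alpha = log_pi(y) - log_pi(x) + log_q(x) - log_q(y)
        if np.log(rng.uniform()) < log_alpha:
            x = y
        out[i] = x
    return out

# Adaptive phase: a few pre-runs, each refitting the proposal by moment
# matching -- the cross-entropy minimizer within the Gaussian family.
mu, sigma = 0.0, 5.0
for _ in range(5):
    pre = independence_mh(log_target, mu, sigma, x0=0.0, n=2000)
    mu, sigma = pre.mean(), pre.std()

# Final run with the adapted proposal.
samples = independence_mh(log_target, mu, sigma, x0=mu, n=20000)
```

Because the proposal is independent of the current state, a well-adapted proposal lets the chain jump freely between the two modes; a random-walk sampler with a small step size would instead tend to get stuck in one mode.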




Corresponding author

Correspondence to George Y. Sofronov.


Cite this article

Keith, J.M., Kroese, D.P. & Sofronov, G.Y. Adaptive independence samplers. Stat Comput 18, 409–420 (2008). https://doi.org/10.1007/s11222-008-9070-2
