Statistics and Computing, Volume 18, Issue 4, pp. 409–420

Adaptive independence samplers

  • Jonathan M. Keith
  • Dirk P. Kroese
  • George Y. Sofronov


Markov chain Monte Carlo (MCMC) is an important computational technique for generating samples from non-standard probability distributions. A major challenge in the design of practical MCMC samplers is to achieve efficient convergence and mixing properties. One way to accelerate convergence and mixing is to adapt the proposal distribution in light of previously sampled points, thus increasing the probability of acceptance. In this paper, we propose two new adaptive MCMC algorithms based on the independence Metropolis–Hastings algorithm. In the first, we adjust the proposal to minimize an estimate of the cross-entropy between the target and proposal distributions, using the experience of pre-runs. This approach provides a general technique for deriving natural adaptive formulae. The second approach uses multiple parallel chains, and involves updating chains individually, then updating a proposal density by fitting a Bayesian model to the population. An important feature of this approach is that adapting the proposal does not change the limiting distributions of the chains. Consequently, the adaptive phase of the sampler can be continued indefinitely. We include results of numerical experiments indicating that the new algorithms compete well with traditional Metropolis–Hastings algorithms. We also demonstrate the method for a realistic problem arising in comparative genomics.
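The pre-run adaptation idea can be illustrated with a minimal sketch: run an independence Metropolis–Hastings chain with an initial wide proposal, then refit the proposal to the pre-run draws before the main run. This is not the paper's exact algorithm — here the proposal family is assumed Gaussian, for which the cross-entropy–minimizing update reduces to moment matching, and all function names (`adaptive_imh`, `log_target`) are illustrative.

```python
import numpy as np

def adaptive_imh(log_target, n_pre=2000, n_main=5000, seed=0):
    """Sketch of an adaptive independence Metropolis-Hastings sampler.

    Phase 1 (pre-run): sample with a deliberately wide Gaussian proposal.
    Adapt: refit the proposal's mean/std to the pre-run draws (for a
    Gaussian family, the cross-entropy-minimizing update is moment
    matching). Phase 2 (main run): independence MH with the adapted,
    fixed proposal.
    """
    rng = np.random.default_rng(seed)

    def imh(mu, sigma, n, x0):
        # log importance weight log(target/proposal), up to a constant
        # that cancels in the acceptance ratio for a fixed proposal
        lw = lambda x: log_target(x) + 0.5 * ((x - mu) / sigma) ** 2
        x, lw_x = x0, lw(x0)
        out = np.empty(n)
        for i in range(n):
            y = rng.normal(mu, sigma)  # independent proposal draw
            lw_y = lw(y)
            # Independence-sampler acceptance: min(1, w(y)/w(x))
            if np.log(rng.uniform()) < lw_y - lw_x:
                x, lw_x = y, lw_y
            out[i] = x
        return out

    # Pre-run with a wide initial proposal N(0, 5^2)
    pre = imh(mu=0.0, sigma=5.0, n=n_pre, x0=0.0)
    # Adapt: moment-match the Gaussian proposal to the pre-run sample
    mu_hat, sigma_hat = pre.mean(), pre.std() + 1e-12
    # Main run with the adapted proposal held fixed
    return imh(mu_hat, sigma_hat, n_main, x0=mu_hat)

# Example: target is N(3, 1); the adapted sampler should recover it
log_t = lambda x: -0.5 * (x - 3.0) ** 2
draws = adaptive_imh(log_t)
print(draws.mean(), draws.std())
```

Because the adapted proposal is held fixed during the main run, the chain is an ordinary independence Metropolis–Hastings sampler and standard convergence theory applies; adaptation only affects how close the proposal is to the target, and hence the acceptance rate.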


Keywords: Markov chain Monte Carlo · Generalized Markov sampler · Adaptive methods · Cross-entropy · Comparative genomics





Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  • Jonathan M. Keith, School of Mathematical Sciences, Queensland University of Technology, Brisbane, Australia
  • Dirk P. Kroese, Department of Mathematics, The University of Queensland, Brisbane, Australia
  • George Y. Sofronov, School of Mathematics and Applied Statistics, University of Wollongong, Wollongong, Australia