
On learning strategies for evolutionary Monte Carlo


Abstract

The real-parameter evolutionary Monte Carlo algorithm (EMC) has been proposed as an effective tool both for sampling from high-dimensional distributions and for stochastic optimization (Liang and Wong, 2001). EMC uses a temperature ladder similar to that of parallel tempering (PT; Geyer, 1991); in contrast to PT, however, EMC also allows crossover moves between the parallel, tempered MCMC chains. First, we introduce four new moves in the context of EMC, which enhance its efficiency as measured by the effective sample size. Second, we present a practical strategy for determining the temperature range and for placing the temperatures in the ladder used by EMC and PT. Last, we prove the validity of the conditional sampling step of the snooker algorithm, a crossover move in EMC, thereby extending a result of Roberts and Gilks (1994).
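To make the temperature-ladder idea concrete, below is a minimal Python sketch of a generic parallel-tempering-style sampler with a geometrically spaced ladder and exchange (swap) moves between neighbouring chains. It illustrates only the PT machinery that EMC builds on: it does not implement the paper's EMC crossover or snooker moves, and the bimodal target, ladder endpoints, number of temperatures, and step sizes are illustrative assumptions rather than settings taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Illustrative (assumed) bimodal target: equal mixture of two
    # well-separated unit-variance normal modes centred at +4 and -4.
    return np.logaddexp(-0.5 * np.sum((x - 4.0) ** 2),
                        -0.5 * np.sum((x + 4.0) ** 2))

def geometric_ladder(t_min=1.0, t_max=20.0, n_temps=6):
    # Temperatures spaced geometrically between t_min and t_max
    # (a common default, not the placement strategy of the paper).
    return t_min * (t_max / t_min) ** (np.arange(n_temps) / (n_temps - 1))

def pt_sample(n_iter=5000, dim=2, step=1.0):
    temps = geometric_ladder()
    n_temps = len(temps)
    chains = rng.normal(size=(n_temps, dim))   # one chain per temperature
    cold_draws = []
    for _ in range(n_iter):
        # Within-chain random-walk Metropolis update at each temperature;
        # hotter chains use a wider proposal and a flattened target.
        for k in range(n_temps):
            prop = chains[k] + step * np.sqrt(temps[k]) * rng.normal(size=dim)
            log_ratio = (log_target(prop) - log_target(chains[k])) / temps[k]
            if np.log(rng.uniform()) < log_ratio:
                chains[k] = prop
        # Exchange (swap) move between a random pair of adjacent temperatures.
        k = rng.integers(n_temps - 1)
        log_swap = (log_target(chains[k]) - log_target(chains[k + 1])) * (
            1.0 / temps[k + 1] - 1.0 / temps[k])
        if np.log(rng.uniform()) < log_swap:
            chains[[k, k + 1]] = chains[[k + 1, k]]
        cold_draws.append(chains[0].copy())    # keep only the cold chain
    return np.array(cold_draws)

if __name__ == "__main__":
    draws = pt_sample()
    print("cold-chain mean after burn-in:", draws[2000:].mean(axis=0))

The geometric spacing in geometric_ladder is only one conventional choice; the temperature-range and placement strategy proposed in the paper, and the EMC crossover and snooker moves it studies, are not reproduced in this sketch.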


References

  • Gelfand A.E. and Smith A.F.M. 1990. Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association 85: 398–409.

  • Gelman A., Carlin J.B., Stern H.S., and Rubin D.B. 2003. Bayesian Data Analysis. Chapman & Hall/CRC.

  • Gelman A., Roberts G.O., and Gilks W.R. 1996. Efficient Metropolis jumping rules. In Bayesian Statistics 5: Proceedings of the Fifth Valencia International Meeting, pp. 599–607.

  • Gelman A. and Rubin D.B. 1992. Inference from iterative simulation using multiple sequences (Disc: pp. 483–501, 503–511). Statistical Science 7: 457–472.

  • Geweke J.F. 1992. Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments (Disc: pp. 189–193). In: Bernardo J.M., Berger J.O., Dawid A.P., and Smith A.F.M. (eds.), Bayesian Statistics 4: Proceedings of the Fourth Valencia International Meeting. Clarendon Press [Oxford University Press], pp. 169–188.

  • Geyer C.J. 1991. Markov chain Monte Carlo maximum likelihood. In Computing Science and Statistics. Proceedings of the 23rd Symposium on the Interface. Interface Foundation of North America (Fairfax Station, VA). pp. 156–163.

  • Geyer C.J. 1992. Practical Markov Chain Monte Carlo (Disc: pp. 483–503). Statistical Science 7: 473–483.

  • Geyer C.J. and Thompson E.A. 1995. Annealing Markov chain Monte Carlo with applications to ancestral inference. Journal of the American Statistical Association 90: 909–920.

  • Gilks W.R., Roberts G.O., and George E.I. 1994. Adaptive direction sampling. The Statistician 43: 179–189.

  • Hastings W.K. 1970. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57: 97–109.

  • Kong A., Liu J.S., and Wong W.H. 1994. Sequential imputations and Bayesian missing data problems. Journal of the American Statistical Association 89: 278–288.

  • Landau D.P. and Binder K. 2000. A Guide to Monte Carlo Simulations in Statistical Physics. Cambridge University Press.

  • Liang F. and Wong W.H. 2000. Evolutionary Monte Carlo: Applications to Cp model sampling and change point problem. Statistica Sinica 10(2): 317–342.

  • Liang F. and Wong W.H. 2001. Real-parameter evolutionary Monte Carlo with applications to Bayesian mixture models. Journal of the American Statistical Association 96: 653–666.

  • Liu J.S. 2001. Monte Carlo Strategies in Scientific Computing. Springer-Verlag Inc.

  • Liu J.S., Liang F., and Wong W.H. 2000. The multiple-try method and local optimization in Metropolis sampling. Journal of the American Statistical Association 95(449): 121–134.

  • Liu J.S. and Sabatti C. 2000. Generalised Gibbs sampler and multigrid Monte Carlo for Bayesian computation. Biometrika 87(2): 353–369.

  • Meng X.-L. and Wong W.H. 1996. Simulating ratios of normalizing constants via a simple identity: A theoretical exploration. Statistica Sinica 6: 831–860.

  • Neal R.M. 1994. An improved acceptance procedure for the hybrid Monte Carlo algorithm. Journal of Computational Physics 111: 194–203.

  • Priestley M.B. 1981. Spectral Analysis and Time Series, Vol. 1: Univariate Series. Academic Press.

  • R Development Core Team. 2004. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, ISBN 3-900051-00-3.

  • Roberts G.O. and Gilks W.R. 1994. Convergence of adaptive direction sampling. Journal of Multivariate Analysis 49: 287–298.

  • Warnes G.R. 2000. An Adaptive Markov Chain Monte Carlo method for efficient sampling from multimodal distributions. Thesis, University of Washington, Seattle.

Author information

Corresponding author

Correspondence to Gopi Goswami.

Cite this article

Goswami, G., Liu, J.S. On learning strategies for evolutionary Monte Carlo. Stat Comput 17, 23–38 (2007). https://doi.org/10.1007/s11222-006-9002-y
