
Posterior exploration based sequential Monte Carlo for global optimization


Abstract

We propose a global optimization algorithm based on the sequential Monte Carlo (SMC) sampling framework. In this framework, the objective function is normalized into a probability density function (pdf), from which a sequence of annealed target pdfs is designed to asymptotically concentrate on the set of global optima. A sequential importance sampling procedure is performed to simulate the resulting targets, and the maxima of the objective function are estimated from the resulting samples. The crucial issue lies in the design of the importance sampling (IS) pdf, which strongly influences the IS efficiency. We propose an approach that designs the IS pdf by embedding a posterior exploration (PE) procedure into each iteration of the SMC framework. The PE procedure explores the important regions of the solution space supported by the target pdf. A byproduct of the PE procedure is an adaptive mechanism for designing the annealing temperature schedule. We compare the proposed algorithm with related existing methods on a dozen benchmark functions. The results demonstrate the appealing properties of our algorithm.
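The abstract outlines a general annealed-SMC recipe: treat exp(beta * f(x)) as an unnormalized target pdf, raise beta over iterations so the target concentrates on the global optima, and alternate importance weighting, resampling, and exploration moves. The sketch below, in Python, illustrates this generic loop under stated assumptions: it uses Gaussian random-walk Metropolis moves as a stand-in for the paper's posterior exploration step and a fixed geometric temperature schedule rather than the adaptive schedule proposed in the paper; it is not the authors' implementation.

```python
import numpy as np

def smc_optimize(objective, dim, bounds, n_particles=500, n_iters=50,
                 step_size=0.1, rng=None):
    """Generic annealed-SMC maximization sketch (not the paper's exact method).

    The target at iteration t is proportional to exp(beta_t * objective(x)),
    so increasing beta_t concentrates probability mass on the global maxima.
    """
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = bounds
    # Fixed geometric schedule; the paper instead adapts it via the PE procedure.
    beta_schedule = np.geomspace(0.01, 10.0, n_iters)

    # Initialize particles uniformly over the search box and evaluate them.
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    f = np.apply_along_axis(objective, 1, x)
    beta_prev = 0.0

    for beta in beta_schedule:
        # Importance weights for moving from the previous annealed target to the new one.
        logw = (beta - beta_prev) * f
        w = np.exp(logw - logw.max())
        w /= w.sum()

        # Multinomial resampling according to the normalized weights.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        x, f = x[idx], f[idx]

        # Exploration move: one Metropolis random-walk step targeting exp(beta * f).
        prop = np.clip(x + step_size * (hi - lo) * rng.standard_normal(x.shape), lo, hi)
        f_prop = np.apply_along_axis(objective, 1, prop)
        accept = np.log(rng.uniform(size=n_particles)) < beta * (f_prop - f)
        x[accept], f[accept] = prop[accept], f_prop[accept]
        beta_prev = beta

    best = np.argmax(f)
    return x[best], f[best]

# Example: maximize a negated Rastrigin function in 2-D (global maximum 0 at the origin).
if __name__ == "__main__":
    obj = lambda z: -(np.sum(z**2 - 10.0 * np.cos(2 * np.pi * z)) + 10.0 * z.size)
    x_best, f_best = smc_optimize(obj, dim=2, bounds=(-5.12, 5.12))
    print(x_best, f_best)
```

In this sketch the annealing increments (beta - beta_prev) play the role of the temperature schedule; the contribution of the paper is to drive both the exploration moves and this schedule adaptively from the posterior exploration step, rather than fixing them in advance as done here.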



Acknowledgements

This work was partly supported by the National Natural Science Foundation (NSF) of China (No. 61571238), the China Postdoctoral Science Foundation (Nos. 2015M580455 and 2016T90483), and the Scientific and Technological Support Project (Society) of Jiangsu Province (No. BE2016776).

Author information


Corresponding author

Correspondence to Bin Liu.


Cite this article

Liu, B. Posterior exploration based sequential Monte Carlo for global optimization. J Glob Optim 69, 847–868 (2017). https://doi.org/10.1007/s10898-017-0543-8
