
Variance Reduction for Better Sampling in Continuous Domains

  • Conference paper
  • In: Parallel Problem Solving from Nature – PPSN XVI (PPSN 2020)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12269)

Abstract

Design of experiments, random search, initialization of population-based methods, and sampling inside an epoch of an evolutionary algorithm all use a sample drawn from some probability distribution to approximate the location of an optimum. Recent papers have shown that the optimal search distribution used for the sampling might be more peaked around the center of the distribution than the prior distribution modelling our uncertainty about the location of the optimum. We confirm this statement, provide explicit values for this reshaping of the search distribution depending on the population size \(\lambda \) and the dimension d, and validate our results experimentally.
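The abstract's claim can be illustrated with a short, self-contained sketch. The code below is not the paper's actual rescaling rule: the shrink factor 0.5, the sphere objective, and the budget values are purely illustrative assumptions. It compares one-shot sampling from a standard Gaussian prior against sampling from the same Gaussian shrunk toward its center, and reports the average best value found in each case.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Simple test objective with its optimum at the origin,
    # i.e. at the center of the prior distribution.
    return float(np.sum(x ** 2))

def one_shot_best(d, lam, scale, trials=200):
    """Average best objective value over `trials` one-shot runs:
    draw `lam` points from N(0, scale^2 * I_d) and keep the best."""
    best_vals = []
    for _ in range(trials):
        pts = scale * rng.standard_normal((lam, d))
        best_vals.append(min(sphere(x) for x in pts))
    return float(np.mean(best_vals))

d, lam = 20, 30
naive = one_shot_best(d, lam, scale=1.0)    # sample from the prior itself
peaked = one_shot_best(d, lam, scale=0.5)   # shrink toward the center (illustrative factor)
print(f"naive: {naive:.3f}  rescaled: {peaked:.3f}")
```

When the optimum sits at (or near) the center of the prior, the rescaled, more peaked search distribution finds a better best-of-\(\lambda\) point; the paper derives the appropriate amount of shrinkage as a function of \(\lambda\) and d rather than using a fixed factor.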


Notes

  1. The arXiv version [20] of the present document includes larger plots and the appendices.

  2. This requires knowledge of \(\inf _{x} f(x)\), which may not be available in real-world applications. In this case, without loss of generality (this is just for the sake of plotting regret values), the infimum can be replaced by an empirical minimum. In all applications considered in this work, the value of \(\inf _x f(x)\) is known.

  3. Detailed results for individual settings are available at http://dl.fbaipublicfiles.com/nevergrad/allxps/list.html.
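Footnote 2's fallback, replacing the unknown infimum by an empirical minimum when plotting regret values, can be sketched as follows. The function name and signature are illustrative, not taken from the paper's code.

```python
import numpy as np

def regrets(values, f_inf=None):
    """Regret of each evaluation: f(x) - inf f.
    If inf f is unknown, fall back to the empirical minimum;
    regrets are then only shifted by a constant, which is
    harmless when the goal is merely to plot them."""
    values = np.asarray(values, dtype=float)
    baseline = f_inf if f_inf is not None else values.min()
    return values - baseline

print(regrets([3.0, 1.5, 2.0], f_inf=1.0).tolist())  # → [2.0, 0.5, 1.0]
print(regrets([3.0, 1.5, 2.0]).tolist())             # → [1.5, 0.0, 0.5]
```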

References

  1. Atanassov, E.I.: On the discrepancy of the Halton sequences. Math. Balkanica (NS) 18(1–2), 15–32 (2004)

  2. Bergstra, J., Bengio, Y.: Random search for hyper-parameter optimization. J. Mach. Learn. Res. 13, 281–305 (2012)

  3. Bossek, J., Doerr, C., Kerschke, P.: Initial design strategies and their effects on sequential model-based optimization. In: Proceeding of the Genetic and Evolutionary Computation Conference (GECCO 2020). ACM (2020). https://arxiv.org/abs/2003.13826

  4. Bossek, J., Kerschke, P., Neumann, A., Neumann, F., Doerr, C.: One-shot decision-making with and without surrogates. CoRR abs/1912.08956 (2019). http://arxiv.org/abs/1912.08956

  5. Bubeck, S., Munos, R., Stoltz, G.: Pure exploration in multi-armed bandits problems. In: Gavaldà, R., Lugosi, G., Zeugmann, T., Zilles, S. (eds.) ALT 2009. LNCS (LNAI), vol. 5809, pp. 23–37. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-04414-4_7

  6. Cauwet, M.L., et al.: Fully parallel hyperparameter search: reshaped space-filling. arXiv preprint arXiv:1910.08406 (2019)

  7. Dick, J., Pillichshammer, F.: Digital Nets and Sequences. Cambridge University Press, Cambridge (2010)

  8. Ergezer, M., Sikder, I.: Survey of oppositional algorithms. In: 14th International Conference on Computer and Information Technology (ICCIT 2011), pp. 623–628 (2011)

  9. Esmailzadeh, A., Rahnamayan, S.: Enhanced differential evolution using center-based sampling. In: 2011 IEEE Congress of Evolutionary Computation (CEC), pp. 2641–2648 (2011)

  10. Esmailzadeh, A., Rahnamayan, S.: Center-point-based simulated annealing. In: 2012 25th IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 1–4 (2012)

  11. Feurer, M., Springenberg, J.T., Hutter, F.: Initializing Bayesian hyperparameter optimization via meta-learning. In: AAAI (2015)

  12. Halton, J.: On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals. Numer. Math. 2, 84–90 (1960). http://eudml.org/doc/131448

  13. Hammersley, J.M.: Monte-Carlo methods for solving multivariate problems. Ann. N. Y. Acad. Sci. 86(3), 844–874 (1960)

  14. James, W., Stein, C.: Estimation with quadratic loss. In: Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Contributions to the Theory of Statistics, vol. 1, pp. 361–379. University of California Press (1961). https://projecteuclid.org/euclid.bsmsp/1200512173

  15. Jones, D.R., Schonlau, M., Welch, W.J.: Efficient global optimization of expensive black-box functions. J. Glob. Optim. 13(4), 455–492 (1998)

  16. Maaranen, H., Miettinen, K., Mäkelä, M.: Quasi-random initial population for genetic algorithms. Comput. Math. Appl. 47(12), 1885–1895 (2004)

  17. Mahdavi, S., Rahnamayan, S., Deb, K.: Center-based initialization of cooperative co-evolutionary algorithm for large-scale optimization. In: 2016 IEEE Congress on Evolutionary Computation (CEC), pp. 3557–3565 (2016)

  18. Matoušek, J.: Geometric Discrepancy, 2nd edn. Springer, Berlin (2010)

  19. McKay, M.D., Beckman, R.J., Conover, W.J.: A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21(2), 239–245 (1979)

  20. Meunier, L., Doerr, C., Rapin, J., Teytaud, O.: Variance reduction for better sampling in continuous domains (2020)

  21. Niederreiter, H.: Random Number Generation and Quasi-Monte Carlo Methods. Society for Industrial and Applied Mathematics, Philadelphia (1992)

  22. Rahnamayan, S., Wang, G.G.: Center-based sampling for population-based algorithms. In: 2009 IEEE Congress on Evolutionary Computation, pp. 933–938, May 2009. https://doi.org/10.1109/CEC.2009.4983045

  23. Rapin, J., Teytaud, O.: Nevergrad - a gradient-free optimization platform (2018). https://GitHub.com/FacebookResearch/Nevergrad

  24. Stein, C.: Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In: Proceeding of the Third Berkeley Symposium on Mathematical Statistics and Probability, Contributions to the Theory of Statistics, vol. 1, pp. 197–206. University of California Press (1956). https://projecteuclid.org/euclid.bsmsp/1200501656

  25. Storn, R., Price, K.: Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 11(4), 341–359 (1997)

  26. Surry, P.D., Radcliffe, N.J.: Inoculation to initialise evolutionary search. In: Fogarty, T.C. (ed.) AISB EC 1996. LNCS, vol. 1143, pp. 269–285. Springer, Heidelberg (1996). https://doi.org/10.1007/BFb0032789

  27. Teytaud, O., Gelly, S., Mary, J.: On the ultimate convergence rates for isotropic algorithms and the best choices among various forms of isotropy. In: Runarsson, T.P., Beyer, H.-G., Burke, E., Merelo-Guervós, J.J., Whitley, L.D., Yao, X. (eds.) PPSN 2006. LNCS, vol. 4193, pp. 32–41. Springer, Heidelberg (2006). https://doi.org/10.1007/11844297_4

  28. Yang, X., Cao, J., Li, K., Li, P.: Improved opposition-based biogeography optimization. In: The Fourth International Workshop on Advanced Computational Intelligence, pp. 642–647 (2011)

  29. Zhang, A., Zhou, Y.: On the non-asymptotic and sharp lower tail bounds of random variables (2018)

Acknowledgements

This work was initiated at Dagstuhl seminar 19431 on Theory of Randomized Optimization Heuristics.

Author information

Correspondence to Laurent Meunier.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Meunier, L., Doerr, C., Rapin, J., Teytaud, O. (2020). Variance Reduction for Better Sampling in Continuous Domains. In: Bäck, T., et al. (eds.) Parallel Problem Solving from Nature – PPSN XVI. PPSN 2020. Lecture Notes in Computer Science, vol 12269. Springer, Cham. https://doi.org/10.1007/978-3-030-58112-1_11

  • DOI: https://doi.org/10.1007/978-3-030-58112-1_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58111-4

  • Online ISBN: 978-3-030-58112-1

  • eBook Packages: Computer Science, Computer Science (R0)
