Principled Design of Continuous Stochastic Search: From Theory to Practice

Chapter
Part of the Natural Computing Series (NCS)

Abstract

We derive a stochastic search procedure for parameter optimization from two first principles: (1) impose the least prior assumptions, namely by maximum-entropy sampling, unbiasedness, and invariance; (2) exploit all available information under the constraints imposed by (1). We additionally require that two of the most basic functions can be solved reasonably fast. Given these principles, two principal heuristics are used: reinforcing good solutions and good steps (increasing their likelihood), and rendering successive steps orthogonal. The resulting search algorithm is the covariance matrix adaptation evolution strategy (CMA-ES), which coincides to a great extent with a natural gradient descent. The invariance properties of the CMA-ES are formalized, as are its maximum likelihood and stationarity properties. A small parameter study is presented for a specific heuristic deduced from the principles of reinforcing good steps and exploiting all information, namely the cumulation of an evolution (search) path. Experiments on two noisy functions are provided.
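The abstract names the building blocks the chapter develops in detail: maximum-entropy (Gaussian) sampling, invariance through rank-based selection, cumulation of an evolution path, and reinforcement of good steps through the covariance update. The following is a minimal Python sketch of how these pieces fit together; it is an illustration under common CMA-ES default parameter settings, not the chapter's reference implementation, and it omits among other things the rank-mu covariance update and the stall safeguard for the evolution path.

```python
# Minimal sketch of the design principles behind CMA-ES (simplified;
# omits the rank-mu update and the h_sigma path-stall safeguard).
import numpy as np

def cma_es_sketch(f, x0, sigma=0.5, max_iter=500, seed=None):
    rng = np.random.default_rng(seed)
    n = len(x0)
    lam = 4 + int(3 * np.log(n))                 # population size (default)
    mu = lam // 2                                # number of selected parents
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                 # positive recombination weights
    mu_eff = 1.0 / np.sum(w ** 2)                # variance effective selection mass

    c_s = (mu_eff + 2) / (n + mu_eff + 5)        # time constant, step-size path
    d_s = 1 + 2 * max(0.0, np.sqrt((mu_eff - 1) / (n + 1)) - 1) + c_s
    c_c = (4 + mu_eff / n) / (n + 4 + 2 * mu_eff / n)  # time constant, cov. path
    c_1 = 2 / ((n + 1.3) ** 2 + mu_eff)          # rank-one learning rate

    m = np.asarray(x0, dtype=float)              # distribution mean
    C = np.eye(n)                                # covariance matrix
    p_s = np.zeros(n)                            # conjugate path (step-size control)
    p_c = np.zeros(n)                            # evolution path (covariance update)
    chi_n = np.sqrt(n) * (1 - 1/(4*n) + 1/(21*n**2))  # approx. E||N(0, I)||

    for _ in range(max_iter):
        D2, B = np.linalg.eigh(C)                # C = B diag(D2) B^T
        D = np.sqrt(np.maximum(D2, 1e-20))
        # maximum-entropy sampling: x_i ~ N(m, sigma^2 C)
        z = rng.standard_normal((lam, n))
        x = m + sigma * (z * D) @ B.T
        # rank-based selection: invariant to monotone transformations of f
        ranks = np.argsort([f(xi) for xi in x])
        m_old = m
        m = w @ x[ranks[:mu]]                    # weighted recombination
        y = (m - m_old) / sigma                  # realized mean step

        # cumulation: low-pass filter over successive selected steps
        C_inv_half = B @ ((1.0 / D)[:, None] * B.T)   # C^{-1/2}
        p_s = (1 - c_s) * p_s + np.sqrt(c_s * (2 - c_s) * mu_eff) * C_inv_half @ y
        p_c = (1 - c_c) * p_c + np.sqrt(c_c * (2 - c_c) * mu_eff) * y

        # reinforce good steps: rank-one update along the cumulated path
        C = (1 - c_1) * C + c_1 * np.outer(p_c, p_c)

        # step-size control: a longer-than-expected conjugate path signals
        # correlated (non-orthogonal) successive steps => increase sigma
        sigma *= np.exp((c_s / d_s) * (np.linalg.norm(p_s) / chi_n - 1))
    return m

# usage: x_min = cma_es_sketch(lambda x: float(x @ x), 3.0 * np.ones(10), seed=1)
```

The two heuristics from the abstract appear directly in the update equations: the rank-one term increases the likelihood of sampling along recently successful steps, and the step-size rule lengthens or shortens steps until successive steps are, in expectation, orthogonal (conjugate with respect to C).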


Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

INRIA Saclay – Île-de-France, Orsay, France