Empirical Investigation of Simplified Step-Size Control in Metaheuristics with a View to Theory

  • Jens Jägersküpper
  • Mike Preuss
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5038)


Randomized direct-search methods for the optimization of a function f:ℝn→ℝ given by a black box for f-evaluations are investigated. We consider cumulative step-size adaptation (CSA) for the variance of multivariate zero-mean normal distributions, which are commonly used to sample new candidate solutions within metaheuristics, in particular within the CMA Evolution Strategy (CMA-ES), a state-of-the-art direct-search method. Although the CMA-ES is very successful in practical optimization, its theoretical foundations are very limited because of the complex stochastic process it induces. To advance the theory on this successful method, we propose two simplifications of the CSA used within CMA-ES for step-size control. We show by experimental and statistical evaluation that, in the considered scenario, they perform sufficiently similarly to the original CSA, so that a further theoretical analysis is indeed reasonable. Furthermore, we outline in detail a probabilistic/theoretical runtime analysis for one of the two CSA derivatives.
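To make the mechanism concrete, the following is a minimal sketch of a (1,λ) evolution strategy with cumulative step-size adaptation on an isotropic mutation distribution. It is illustrative only: the cumulation and damping constants follow common textbook choices (merged into a single `c_sigma`, an assumption), there is no covariance matrix adaptation, and this is not the paper's simplified CSA variants.

```python
import numpy as np

def csa_es(f, x, sigma, lam=10, iters=200, seed=0):
    """Minimal (1,lambda)-ES with cumulative step-size adaptation (CSA).

    Illustrative sketch: isotropic zero-mean normal mutations only,
    no covariance matrix adaptation. Constants are common textbook
    choices, not the simplified CSA variants proposed in the paper.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    c_sigma = 1.0 / np.sqrt(n)  # cumulation constant (damping folded in; assumed)
    # expected norm of an n-dimensional standard normal vector
    chi_n = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n * n))
    path = np.zeros(n)          # evolution path accumulating selected steps
    for _ in range(iters):
        steps = rng.standard_normal((lam, n))
        cands = x + sigma * steps
        best = np.argmin([f(c) for c in cands])
        x = cands[best]
        # exponentially fade the path and add the selected step direction
        path = (1 - c_sigma) * path + np.sqrt(c_sigma * (2 - c_sigma)) * steps[best]
        # grow sigma if the path is longer than expected under pure randomness
        # (consecutive steps correlated), shrink it if shorter (steps cancel out)
        sigma *= np.exp(c_sigma * (np.linalg.norm(path) / chi_n - 1))
    return x, sigma
```

On a simple sphere function, e.g. `csa_es(lambda v: float(v @ v), np.full(5, 10.0), 1.0)`, the step size shrinks geometrically as the search approaches the optimum, which is the behavior the paper's simplified variants are designed to preserve.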





Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Jens Jägersküpper (1)
  • Mike Preuss (1)
  1. Fakultät für Informatik, Technische Universität Dortmund, Dortmund, Germany
