Evaluating the CMA Evolution Strategy on Multimodal Test Functions

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3242)


In this paper, the performance of the CMA evolution strategy with rank-μ-update and weighted recombination is empirically investigated on eight multimodal test functions. In particular, the effect of the population size λ on the performance is investigated. Increasing the population size remarkably improves the performance on six of the eight test functions. The optimal population size takes a wide range of values but, with one exception, scales sub-linearly with the problem dimension. The global optimum can be located on all but one function. The performance for locating the global optimum scales between linearly and cubically with the problem dimension. In a comparison to state-of-the-art global search strategies, the CMA evolution strategy achieves superior performance on multimodal, non-separable test functions without intricate parameter tuning.


Keywords: Large population size · Problem dimension · Covariance matrix adaptation · Rastrigin function · Initial step size





Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

Computational Science and Engineering Laboratory (CSE Lab), Swiss Federal Institute of Technology (ETH) Zurich, Switzerland
