The CMA Evolution Strategy: A Comparing Review

Chapter in: Towards a New Evolutionary Computation

Part of the book series: Studies in Fuzziness and Soft Computing (STUDFUZZ, volume 192)

Summary

Derived from the concept of self-adaptation in evolution strategies, the CMA (Covariance Matrix Adaptation) adapts the covariance matrix of a multivariate normal search distribution. The CMA was originally designed to perform well with small populations. In this review, the argument starts out with large population sizes, reflecting recent extensions of the CMA algorithm. Commonalities and differences from continuous Estimation of Distribution Algorithms are analyzed. With small population sizes, the aspects of reliability of the estimation, overall step-size control, and independence from the coordinate system (invariance) become particularly important. Consequently, performing the adaptation task with small populations is more intricate.
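
The following Python sketch is a rough illustration of the mechanism the summary describes, assuming a plain rank-μ-style covariance update with a fixed global step size: offspring are sampled from a multivariate normal distribution N(m, σ²C), the best μ of λ are selected, and the mean and covariance matrix are updated from the selected steps. The function name cma_sketch, all constants, and the sphere objective are illustrative choices, not the chapter's parameter settings; in particular, the cumulative path-length control of σ discussed in the review is omitted here.

```python
# Minimal illustrative sketch (not the chapter's exact algorithm): sample offspring
# from N(m, sigma^2 * C), keep the best mu of lambda, and update the mean and the
# covariance matrix from the selected steps (a plain rank-mu-style update).
# All constants, the function name, and the sphere objective are ad hoc choices;
# the global step size sigma is kept fixed here, whereas the CMA-ES adapts it
# (e.g. by cumulative path-length control).
import numpy as np

def cma_sketch(f, m, sigma=0.3, lam=50, iters=300, c_cov=0.2, seed=0):
    rng = np.random.default_rng(seed)
    n = len(m)
    mu = lam // 2                                   # number of selected offspring
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                    # positive, decreasing recombination weights
    C = np.eye(n)                                   # covariance matrix of the search distribution
    for _ in range(iters):
        y = rng.multivariate_normal(np.zeros(n), C, size=lam)   # y_k ~ N(0, C)
        x = m + sigma * y                                       # offspring x_k = m + sigma * y_k
        sel = np.argsort([f(xi) for xi in x])[:mu]              # indices of the mu best offspring
        y_sel = y[sel]
        m = m + sigma * (w @ y_sel)                 # weighted recombination of the selected steps
        # rank-mu-style update: pull C toward the weighted empirical covariance of selected steps
        C = (1.0 - c_cov) * C + c_cov * (y_sel.T * w) @ y_sel
        C = (C + C.T) / 2.0                         # keep C numerically symmetric
    return m

# Usage (illustrative): minimize the sphere function in 5 dimensions.
if __name__ == "__main__":
    m_final = cma_sketch(lambda x: float(np.sum(x**2)), m=np.full(5, 3.0))
    print(m_final)
```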

Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In: Lozano, J.A., Larrañaga, P., Inza, I., Bengoetxea, E. (eds) Towards a New Evolutionary Computation. Studies in Fuzziness and Soft Computing, vol 192. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-32494-1_4

  • DOI: https://doi.org/10.1007/3-540-32494-1_4

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-29006-3

  • Online ISBN: 978-3-540-32494-2
