Multi-stage Constraint Surrogate Models for Evolution Strategies

  • Jendrik Poloczek
  • Oliver Kramer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8736)

Abstract

Real-parameter black-box optimization with evolution strategies (ES) is often applied when the fitness function or its characteristics are not explicitly given. Evaluating fitness and feasibility can be expensive. In the past, various surrogate model approaches have been proposed to address this issue. In our previous work, we proposed local feasibility surrogate models that are trained with already evaluated individuals. This tight interdependency with the optimization process leads to complex side effects when combined with meta-heuristics such as the covariance matrix adaptation ES (CMA-ES). The objective of this paper is to propose a new type of constraint surrogate model that uses the concept of active learning in multiple stages to estimate the constraint boundary with stage-dependent accuracy. The underlying linear model of the constraint boundary is estimated in every stage with binary search. In the optimization process, a pre-selection scheme is employed to save constraint function calls. The surrogate model is evaluated on a simple adaptive (1+1)-ES as well as on the more complex (1+1)-CMA-ES for constrained optimization. The results of both ES on a linearly constrained test bed are promising.
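The binary-search idea from the abstract can be illustrated with a small sketch: given one feasible and one infeasible point, bisect the segment between them until the midpoint is within a tolerance of the constraint boundary, and use the resulting surrogate to pre-select candidates before spending real constraint evaluations. This is not the paper's implementation; the function names (`boundary_point`, `pre_select`) and the example constraint `x0 + x1 <= 1` are hypothetical choices for illustration only.

```python
import numpy as np

def boundary_point(feasible, infeasible, is_feasible, tol=1e-6):
    """Bisect the segment between a feasible and an infeasible point
    until it is shorter than `tol`; the midpoint then approximates a
    point on the constraint boundary."""
    lo = np.asarray(feasible, dtype=float)    # stays on the feasible side
    hi = np.asarray(infeasible, dtype=float)  # stays on the infeasible side
    while np.linalg.norm(hi - lo) > tol:
        mid = 0.5 * (lo + hi)
        if is_feasible(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def pre_select(offspring, surrogate_feasible):
    """Pre-selection: only candidates the surrogate predicts feasible
    are passed on to the expensive true constraint function."""
    return [x for x in offspring if surrogate_feasible(x)]

# Hypothetical linear constraint: feasible iff x0 + x1 <= 1.
is_feasible = lambda x: x[0] + x[1] <= 1.0

# The boundary between (0, 0) and (2, 2) lies at (0.5, 0.5).
p = boundary_point([0.0, 0.0], [2.0, 2.0], is_feasible)
```

In the multi-stage setting described in the abstract, several such boundary points would be collected per stage and a linear model fitted through them, with the tolerance tightened from stage to stage.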

Keywords

Binary Search · Cumulate Amount · Infeasible Solution · Cholesky Factor · Constraint Boundary

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Jendrik Poloczek¹
  • Oliver Kramer¹
  1. Computational Intelligence Group, Department of Computing Science, University of Oldenburg, Germany