GECCO 2003: Genetic and Evolutionary Computation — GECCO 2003, pp. 610–621
Model-Assisted Steady-State Evolution Strategies
Abstract
Speeding up the optimization process on problems with very time-consuming fitness functions is a central concern in evolutionary computation. A popular idea is to use a model as a surrogate for the real fitness function. The performance of this approach depends strongly on how often the model is updated with data from new fitness evaluations; in generation-based algorithms, however, the update happens only once every λ fitness evaluations. To overcome this limitation we use a steady-state strategy, which updates the model immediately after each fitness evaluation. We present a new model-assisted steady-state Evolution Strategy (ES) that uses Radial-Basis-Function networks as the model. To support self-adaptation in the steady-state algorithm, a median selection scheme is applied. The convergence behavior of the new algorithm is examined with numerical results from extensive simulations on several high-dimensional test functions. It achieves better results than a standard ES, a steady-state ES, or a model-assisted ES.
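The loop described above can be sketched in code: a surrogate is pre-screened over several mutants, only the most promising one is evaluated on the real fitness function, and the model is refitted immediately after that single evaluation. This is a minimal illustrative sketch, not the authors' implementation: the fixed step size, the Gaussian RBF width, the sphere test function, and the simple replace-worst rule (standing in for the paper's median selection scheme) are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Stand-in for the expensive real fitness function (assumption)."""
    return float(np.sum(x ** 2))

class RBFModel:
    """Gaussian RBF interpolant fitted to all points evaluated so far."""
    def __init__(self, width=2.0):
        self.width = width
        self.X = None
        self.w = None

    def fit(self, X, y):
        self.X = np.asarray(X)
        d2 = np.sum((self.X[:, None] - self.X[None]) ** 2, axis=-1)
        Phi = np.exp(-d2 / (2 * self.width ** 2))
        # small ridge term keeps the linear system well conditioned
        self.w = np.linalg.solve(Phi + 1e-8 * np.eye(len(Phi)), np.asarray(y))

    def predict(self, x):
        d2 = np.sum((self.X - x) ** 2, axis=-1)
        return float(np.exp(-d2 / (2 * self.width ** 2)) @ self.w)

def steady_state_mases(dim=4, mu=8, pre=10, evals=100, sigma=0.5):
    """Model-assisted steady-state ES sketch: one real evaluation per step."""
    pop = [rng.normal(0.0, 1.0, dim) for _ in range(mu)]
    fits = [sphere(x) for x in pop]
    init_best = min(fits)
    archive_X, archive_y = list(pop), list(fits)
    model = RBFModel()
    model.fit(archive_X, archive_y)
    for _ in range(evals - mu):
        parent = pop[int(np.argmin(fits))]
        # model pre-screening: generate several mutants, but spend the
        # real evaluation only on the one the surrogate ranks best
        cands = [parent + sigma * rng.normal(0.0, 1.0, dim) for _ in range(pre)]
        best_cand = min(cands, key=model.predict)
        f = sphere(best_cand)              # single real fitness evaluation
        archive_X.append(best_cand)
        archive_y.append(f)
        model.fit(archive_X, archive_y)    # update the model immediately
        worst = int(np.argmax(fits))
        if f < fits[worst]:                # steady-state replacement
            pop[worst], fits[worst] = best_cand, f
    return init_best, min(fits)
```

Because only one individual enters the population per model update, the surrogate always reflects the latest real evaluation; a generational scheme would instead refit only after λ evaluations.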