Variance Scaling for EDAs Revisited

  • Oliver Kramer
  • Fabian Gieseke
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7006)

Abstract

Estimation of distribution algorithms (EDAs) are derivative-free optimization approaches based on successively estimating the probability density function of the best solutions and sampling from it. It turns out that the success of EDAs in numerical optimization strongly depends on the scaling of the variance. The contribution of this paper is a comparison of various adaptive and self-adaptive variance scaling techniques for a Gaussian EDA. The analysis covers: (1) the Gaussian EDA without scaling, but with different selection pressures and population sizes, (2) variance adaptation with Silverman's rule of thumb, (3) σ-self-adaptation as known from evolution strategies, and (4) a transformation of the solution space based on an estimate of the Hessian. We discuss the results for the sphere function and its constrained counterpart.
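
As a concrete illustration of technique (2), the sketch below (a minimal example, not the authors' implementation; all function names and parameter settings are assumptions) runs a kernel-density Gaussian EDA on the sphere function. Each generation keeps the m best individuals by truncation selection and samples offspring from a Gaussian kernel density estimate whose per-dimension bandwidth h_i = σ_i (4/(d+2))^{1/(d+4)} m^{-1/(d+4)} follows Silverman's rule of thumb:

```python
import numpy as np

def sphere(x):
    # Sphere function f(x) = sum_i x_i^2, minimum 0 at the origin.
    return float(np.sum(x ** 2))

def kde_eda(f, dim=10, pop_size=100, mu=25, max_iter=500, seed=0):
    # Illustrative kernel-density EDA; names and settings are assumptions,
    # not the paper's experimental setup.
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    for _ in range(max_iter):
        # Truncation selection: keep the mu best individuals.
        best = pop[np.argsort([f(x) for x in pop])[:mu]]
        sigma = best.std(axis=0) + 1e-12  # per-dimension sample spread
        # Silverman's rule-of-thumb bandwidth for a Gaussian kernel in d dims:
        #   h_i = sigma_i * (4 / (d + 2))**(1/(d+4)) * mu**(-1/(d+4))
        h = sigma * (4.0 / (dim + 2)) ** (1.0 / (dim + 4)) * mu ** (-1.0 / (dim + 4))
        # Sample offspring from the KDE: uniform parent choice plus kernel noise,
        # which inflates the variance beyond the plain maximum-likelihood fit.
        parents = best[rng.integers(0, mu, size=pop_size)]
        pop = parents + rng.normal(0.0, h, size=(pop_size, dim))
    return min(pop, key=f)

print(sphere(kde_eda(sphere)))
```

Technique (3), by contrast, lets each individual carry its own step size that is mutated log-normally before use, σ' = σ exp(τ N(0,1)) with learning rate τ ∝ 1/√(2d), so that successful step sizes are inherited along with the solutions; this is the standard self-adaptation mechanism of evolution strategies.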

Keywords

Evolution Strategy · Kernel Density Estimation · Sphere Function · Premature Convergence · Variance Scaling

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Oliver Kramer (1)
  • Fabian Gieseke (1)
  1. Department Informatik, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
