Abstract
Estimation of distribution algorithms (EDAs) are derivative-free optimization approaches based on successively estimating the probability density function of the best solutions and then sampling from it. It turns out that the success of EDAs in numerical optimization depends strongly on the scaling of the variance. The contribution of this paper is a comparison of various adaptive and self-adaptive variance scaling techniques for a Gaussian EDA. The analysis covers: (1) the Gaussian EDA without scaling, but with different selection pressures and population sizes, (2) variance adaptation via Silverman's rule-of-thumb, (3) σ-self-adaptation as known from evolution strategies, and (4) transformation of the solution space by estimation of the Hessian. We discuss the results for the sphere function and its constrained counterpart.
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Kramer, O., Gieseke, F. (2011). Variance Scaling for EDAs Revisited. In: Bach, J., Edelkamp, S. (eds) KI 2011: Advances in Artificial Intelligence. KI 2011. Lecture Notes in Computer Science(), vol 7006. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24455-1_16
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-24454-4
Online ISBN: 978-3-642-24455-1