Enhancing the Performance of Maximum-Likelihood Gaussian EDAs Using Anticipated Mean Shift

  • Peter A. N. Bosman
  • Jörn Grahl
  • Dirk Thierens
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5199)

Abstract

Many Estimation-of-Distribution Algorithms (EDAs) use maximum-likelihood (ML) estimates. For discrete variables this has met with great success. For continuous variables, however, ML estimates for the normal distribution do not directly lead to successful optimization in most landscapes. It was previously found that an important reason for this is the premature shrinking of the variance at an exponential rate, and remedies were subsequently formulated, namely Adaptive Variance Scaling (AVS) and Standard-Deviation Ratio triggering (SDR). Here we focus on a second source of inefficiency that is not removed by these existing remedies, and we provide a simple but effective technique, called Anticipated Mean Shift (AMS), that removes it.
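
The AMS technique itself is detailed in the body of the paper rather than in this abstract. The following minimal Python sketch only illustrates the general idea as stated above: in a Gaussian EDA with truncation selection and ML re-estimation, a fraction of each newly sampled population is shifted further along the direction in which the estimated mean moved in the previous generation, counteracting the premature shrinking of the variance. The parameter names `AMS_FRACTION` and `AMS_DELTA` and their values are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch (not the authors' reference implementation) of a
# univariate Gaussian EDA with an Anticipated-Mean-Shift-style step.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Optimum deliberately placed far from the initial mean at 0.
    return np.sum((x - 10.0) ** 2, axis=-1)

n, pop_size, tau = 1, 100, 0.35   # dimension, population size, selection fraction
AMS_FRACTION = 0.5                # assumption: fraction of offspring shifted
AMS_DELTA = 2.0                   # assumption: multiplier on the mean shift

mean = np.zeros(n)
cov = np.eye(n)
prev_mean = mean.copy()

for gen in range(100):
    pop = rng.multivariate_normal(mean, cov, size=pop_size)

    # Anticipated mean shift: move part of the new samples further along
    # the direction the mean traveled in the previous generation.
    shift = AMS_DELTA * (mean - prev_mean)
    k = int(AMS_FRACTION * pop_size)
    pop[:k] += shift

    # Truncation selection followed by maximum-likelihood re-estimation.
    selected = pop[np.argsort(sphere(pop))[: int(tau * pop_size)]]
    prev_mean = mean
    mean = selected.mean(axis=0)
    cov = np.cov(selected, rowvar=False).reshape(n, n)

print("final mean:", mean)
```

Setting `AMS_FRACTION = 0.0` recovers the plain ML Gaussian EDA, whose variance collapses before the mean reaches the optimum at 10; with the shift enabled, the mean keeps traveling in the improving direction.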



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Peter A. N. Bosman (1)
  • Jörn Grahl (2)
  • Dirk Thierens (3)
  1. Centre for Mathematics and Computer Science, Amsterdam, The Netherlands
  2. University of Mannheim, Mannheim, Germany
  3. Department of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands
