
Algorithms (X, sigma, eta): Quasi-random Mutations for Evolution Strategies

  • Anne Auger
  • Mohammed Jebalia
  • Olivier Teytaud
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3871)

Abstract

Randomization is an efficient tool for global optimization. Here we define a method which keeps:

– the zeroth-order nature of evolutionary algorithms (no gradient information is used);

– the stochastic aspect of evolutionary algorithms;

– the efficiency of so-called "low-dispersion" points;

and which ensures, under mild assumptions, global convergence with a linear convergence rate. We use (i) sampling on a ball instead of Gaussian sampling (in a way inspired by trust regions); (ii) an original rule for step-size adaptation; and (iii) quasi-Monte Carlo sampling (low-dispersion points) instead of Monte Carlo sampling. In this framework we prove linear convergence rates (i) for global optimization and not only local optimization, and (ii) under very mild assumptions on the regularity of the function (existence of derivatives is not required). Although the main scope of this paper is theoretical, numerical experiments are provided to back up the mathematical results.
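
As a concrete illustration of the three ingredients listed above, the sketch below (in Python, chosen here since the paper gives no code) combines mutations sampled inside a ball of radius sigma, a step-size update, and a Halton low-discrepancy sequence in place of pseudo-random draws, within a (1+1)-style loop. It is only a sketch under stated assumptions: the names (halton, ball_point, qr_es), the skip-ahead constant, and the 1.1/0.9 step-size factors are illustrative choices; the adaptation rule is a classical 1/5-success-type placeholder rather than the paper's original rule; and the rejection mapping from the cube to the ball is a simplification of proper low-dispersion point sets. It does not reproduce the paper's exact algorithm or its convergence analysis.

    # Minimal illustrative sketch, not the paper's exact algorithm.
    # Ingredients shown: (i) mutation sampled inside a ball of radius sigma,
    # (ii) a simple step-size adaptation (1/5-success-type placeholder),
    # (iii) quasi-random (Halton) points instead of pseudo-random draws.
    import numpy as np

    PRIMES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29)

    def halton(index, base):
        # One coordinate of the Halton low-discrepancy sequence (index >= 1).
        f, r = 1.0, 0.0
        while index > 0:
            f /= base
            r += f * (index % base)
            index //= base
        return r

    def ball_point(index, dim):
        # Map Halton points in [0,1]^dim to the unit ball by rejection.
        # Workable only in low dimension; a simplification of true
        # low-dispersion point sets on the ball.
        while True:
            u = np.array([halton(index, PRIMES[i]) for i in range(dim)])
            x = 2.0 * u - 1.0                    # [0,1]^dim -> [-1,1]^dim
            if np.dot(x, x) <= 1.0:
                return x
            index += 9973                        # deterministic skip-ahead, retry

    def qr_es(f, x0, sigma0=1.0, budget=2000):
        # (1+1)-style loop: keep improving offspring, adapt the ball radius sigma.
        x = np.asarray(x0, dtype=float)
        fx, sigma = f(x), sigma0
        for t in range(1, budget + 1):
            y = x + sigma * ball_point(t, x.size)  # quasi-random step in the sigma-ball
            fy = f(y)
            if fy <= fx:
                x, fx, sigma = y, fy, sigma * 1.1  # success: accept and enlarge sigma
            else:
                sigma *= 0.9                       # failure: shrink sigma
        return x, fx

    # Example: minimize the sphere function in dimension 3.
    if __name__ == "__main__":
        def sphere(z):
            return float(np.dot(z, z))
        x_best, f_best = qr_es(sphere, x0=np.ones(3))
        print(x_best, f_best)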

Keywords

Evolutionary Algorithm, Evolution Strategy, Linear Convergence, Gaussian Sampling

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Anne Auger (3)
  • Mohammed Jebalia (1)
  • Olivier Teytaud (1, 2)

  1. Equipe TAO – INRIA Futurs, LRI, Bât. 490, Université Paris-Sud, Orsay, France
  2. Artelys, Paris, France
  3. CoLab, ETH Zentrum CAB F 84, Zürich, Switzerland
