Self-Adjusting Mutation Rates with Provably Optimal Success Rules


Abstract

The one-fifth success rule is one of the best-known and most widely accepted techniques to control the parameters of evolutionary algorithms. While it is often applied in the literal sense, a common interpretation sees the one-fifth success rule as a family of success-based update rules that are determined by an update strength F and a success rate. We analyze in this work how the performance of the (1+1) Evolutionary Algorithm on LeadingOnes depends on these two hyper-parameters. Our main result shows that the best performance is obtained for small update strengths \(F=1+o(1)\) and success rate 1/e. We also prove that the running time obtained by this parameter setting is, apart from lower-order terms, the same as that achieved with the best fitness-dependent mutation rate. We show similar results for the resampling variant of the (1+1) Evolutionary Algorithm, which enforces that at least one bit is flipped in each iteration.
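
To make the family of rules described in the abstract concrete, below is a minimal Python sketch (not taken from the paper) of a (1+1) EA on LeadingOnes with a multiplicative success-based update: after a successful iteration the mutation rate is multiplied by the update strength F, and after an unsuccessful one by \(F^{-s/(1-s)}\), so that the rate is stationary exactly when the empirical success rate equals s (s = 1/5 recovers the classical one-fifth rule, s = 1/e is the setting analyzed in this work). The function names, the initial rate 1/n, the clamping bounds, and the exact success and acceptance criteria are illustrative assumptions; the algorithm analyzed in the paper may differ in these details.

```python
import math
import random


def leading_ones(x):
    """Return the number of leading one-bits in the bit list x."""
    count = 0
    for bit in x:
        if bit == 1:
            count += 1
        else:
            break
    return count


def one_plus_one_ea_self_adjusting(n, F=1.05, s=1 / math.e, seed=None):
    """(1+1) EA on LeadingOnes with a success-based multiplicative update
    of the mutation rate p: multiply p by F after a success (strict
    improvement), by F**(-s / (1 - s)) after a failure.  Returns the
    number of fitness evaluations until the optimum is found."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = leading_ones(x)
    p = 1.0 / n                      # initial mutation rate (illustrative choice)
    evaluations = 1
    while fx < n:
        # standard bit mutation: flip each bit independently with probability p
        y = [1 - b if rng.random() < p else b for b in x]
        fy = leading_ones(y)
        evaluations += 1
        if fy > fx:                  # success: increase the mutation rate
            p *= F
        else:                        # failure: decrease the mutation rate
            p *= F ** (-s / (1 - s))
        p = min(max(p, 1.0 / n ** 2), 0.5)  # keep p in a sane range (illustrative)
        if fy >= fx:                 # elitist acceptance of the (1+1) EA
            x, fx = y, fy
    return evaluations


if __name__ == "__main__":
    print(one_plus_one_ea_self_adjusting(100, F=1.05, seed=42))
```

For instance, one_plus_one_ea_self_adjusting(100, F=1.05, seed=42) returns the number of fitness evaluations this sketch needs to reach the all-ones string of length 100.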


Notes

  1. In the preliminary version [13] of this work, we made the stronger claim that our algorithm is asymptotically optimal among all dynamic choices of the mutation rate of the \({(1 + 1)}\) EA. While we still believe this to be true, we note that this claim is more substantial than we originally thought. The main difficulty is that it is less obvious than we originally assumed whether the result of [3] extends to all dynamic choices of the mutation rate, in other words, whether no asymptotically non-negligible performance gains can be made by letting the mutation rate depend not only on the fitness of the parent, but also on the whole history of the process. We strongly believe that such a statement can be shown with the theory of Markov decision processes. Since this would be a deep mathematical analysis that is not central to this work (the analysis of multiplicative update rules), we prefer not to conduct this proof and instead reduce our original optimality claim. We are thankful for a comment by an anonymous reviewer that led to the discovery of this gap in our previous optimality statement.

References

  1. Aleti, A., Moser, I.: A systematic literature review of adaptive parameter control methods for evolutionary algorithms. ACM Comput. Surv. 49, 56:1–56:35 (2016)

  2. Auger, A.: Benchmarking the (1+1) evolution strategy with one-fifth success rule on the BBOB-2009 function testbed. In: Companion Material for Proc. of Genetic and Evolutionary Computation Conference (GECCO’09), pp. 2447–2452. ACM (2009)

  3. Böttcher, S., Doerr, B., Neumann, F.: Optimal fixed and adaptive mutation rates for the LeadingOnes problem. In: Proc. of Parallel Problem Solving from Nature (PPSN’10), Lecture Notes in Computer Science, vol. 6238, pp. 1–10. Springer (2010)

  4. Buzdalov, M., Doerr, B., Doerr, C., Vinokurov, D.: Fixed-target runtime analysis. In: Proc. of Genetic and Evolutionary Computation Conference (GECCO’20), pp. 1295–1303. ACM (2020). https://doi.org/10.1145/3377930.3390184

  5. Carvalho Pinto, E., Doerr, C.: Discussion of a more practice-aware runtime analysis for evolutionary algorithms. In: Proc. of Artificial Evolution (EA’17), pp. 298–305 (2017). https://ea2017.inria.fr//EA2017_Proceedings_web_ISBN_978-2-9539267-7-4.pdf. Extended version available online at arXiv:1812.00493

  6. Carvalho Pinto, E., Doerr, C.: A simple proof for the usefulness of crossover in black-box optimization. In: Proc. of Parallel Problem Solving from Nature (PPSN’18), Lecture Notes in Computer Science, vol. 11102, pp. 29–41. Springer (2018). https://doi.org/10.1007/978-3-319-99259-4_3

  7. Costa, L.D., Fialho, Á., Schoenauer, M., Sebag, M.: Adaptive operator selection with dynamic multi-armed bandits. In: Proc. of Genetic and Evolutionary Computation Conference (GECCO’08), pp. 913–920. ACM (2008)

  8. Devroye, L.: The compound random search. Ph.D. dissertation, Purdue Univ., West Lafayette, IN (1972)

  9. Doerr, B.: Analyzing randomized search heuristics via stochastic domination. Theor. Comput. Sci. 773, 115–137 (2019)

  10. Doerr, B.: Probabilistic tools for the analysis of randomized optimization heuristics. In: B. Doerr, F. Neumann (eds.) Theory of Evolutionary Computation: Recent Developments in Discrete Optimization, pp. 1–87. Springer (2020). Also available at arXiv:1801.06733

  11. Doerr, B., Doerr, C.: Optimal static and self-adjusting parameter choices for the \((1+(\lambda ,\lambda ))\) genetic algorithm. Algorithmica 80, 1658–1709 (2018)

  12. Doerr, B., Doerr, C.: Theory of parameter control for discrete black-box optimization: Provable performance gains through dynamic parameter choices. In: Theory of Evolutionary Computation: Recent Developments in Discrete Optimization, pp. 271–321. Springer (2020). Also available at arXiv:1804.05650

  13. Doerr, B., Doerr, C., Lengler, J.: Self-adjusting mutation rates with provably optimal success rules. In: Proc. of Genetic and Evolutionary Computation Conference (GECCO’19), pp. 1479–1487. ACM (2019). https://doi.org/10.1145/3321707.3321733

  14. Doerr, B., Doerr, C., Yang, J.: \(k\)-bit mutation with self-adjusting \(k\) outperforms standard bit mutation. In: Proc. of Parallel Problem Solving from Nature (PPSN’16), Lecture Notes in Computer Science, vol. 9921, pp. 824–834. Springer (2016)

  15. Doerr, B., Jansen, T., Witt, C., Zarges, C.: A method to derive fixed budget results from expected optimisation times. In: Proc. of Genetic and Evolutionary Computation Conference (GECCO’13), pp. 1581–1588. ACM (2013). https://doi.org/10.1145/2463372.2463565

  16. Doerr, B., Johannsen, D., Winzen, C.: Multiplicative drift analysis. Algorithmica 64, 673–697 (2012)

  17. Doerr, B., Kötzing, T.: Lower bounds from fitness levels made easy. In: Proc. of Genetic and Evolutionary Computation Conference (GECCO’21), pp. 1142–1150. ACM (2021). https://doi.org/10.1145/3449639.3459352

  18. Doerr, B., Lissovoi, A., Oliveto, P.S., Warwicker, J.A.: On the runtime analysis of selection hyper-heuristics with adaptive learning periods. In: Proc. of Genetic and Evolutionary Computation Conference (GECCO’18), pp. 1015–1022. ACM (2018)

  19. Doerr, B., Witt, C., Yang, J.: Runtime analysis for self-adaptive mutation rates. Algorithmica 83, 1012–1053 (2021)

  20. Doerr, C., Wagner, M.: On the effectiveness of simple success-based parameter selection mechanisms for two classical discrete black-box optimization benchmark problems. In: Proc. of Genetic and Evolutionary Computation Conference (GECCO’18), pp. 943–950. ACM (2018). https://doi.org/10.1145/3205455.3205560

  21. Eiben, Á.E., Hinterding, R., Michalewicz, Z.: Parameter control in evolutionary algorithms. IEEE Trans. Evolut. Comput. 3, 124–141 (1999)

  22. Fialho, Á., Costa, L.D., Schoenauer, M., Sebag, M.: Analyzing bandit-based adaptive operator selection mechanisms. Ann. Math. Artif. Intell. 60, 25–64 (2010). https://doi.org/10.1007/s10472-010-9213-y

  23. Hansen, N., Gawelczyk, A., Ostermeier, A.: Sizing the population with respect to the local progress in (1,\(\lambda \))-evolution strategies - a theoretical analysis. In: Proc. of Congress on Evolutionary Computation (CEC’95), pp. 80–85. IEEE (1995)

  24. Jansen, T., De Jong, K.A., Wegener, I.: On the choice of the offspring population size in evolutionary algorithms. Evolut. Comput. 13, 413–440 (2005)

  25. Karafotias, G., Eiben, Á.E., Hoogendoorn, M.: Generic parameter control with reinforcement learning. In: Proc. of Genetic and Evolutionary Computation Conference (GECCO’14), pp. 1319–1326. ACM (2014). https://doi.org/10.1145/2576768.2598360

  26. Karafotias, G., Hoogendoorn, M., Eiben, Á.E.: Parameter control in evolutionary algorithms: trends and challenges. IEEE Trans. Evolut. Comput. 19, 167–187 (2015)

  27. Kern, S., Müller, S.D., Hansen, N., Büche, D., Ocenasek, J., Koumoutsakos, P.: Learning probability distributions in continuous evolutionary algorithms - a comparative review. Natural Comput. 3, 77–112 (2004)

  28. Lässig, J., Sudholt, D.: Adaptive population models for offspring populations and parallel evolutionary algorithms. In: Proc. of Foundations of Genetic Algorithms (FOGA’11), pp. 181–192. ACM (2011)

  29. Lehre, P.K., Witt, C.: Black-box search by unbiased variation. Algorithmica 64, 623–642 (2012)

  30. Lissovoi, A., Oliveto, P.S., Warwicker, J.A.: On the runtime analysis of generalised selection hyper-heuristics for pseudo-Boolean optimisation. In: Proc. of Genetic and Evolutionary Computation Conference (GECCO’17), pp. 849–856. ACM (2017)

  31. Lissovoi, A., Oliveto, P.S., Warwicker, J.A.: Simple hyper-heuristics control the neighbourhood size of randomised local search optimally for LeadingOnes. Evolut. Comput. 28(3), 437–461 (2020). https://doi.org/10.1162/evco_a_00258

  32. Rechenberg, I.: Evolutionsstrategie. Friedrich Fromman Verlag (Günther Holzboog KG), Stuttgart (1973)

  33. Rodionova, A., Antonov, K., Buzdalova, A., Doerr, C.: Offspring population size matters when comparing evolutionary algorithms with self-adjusting mutation rates. In: Proc. of Genetic and Evolutionary Computation Conference (GECCO’19), pp. 855–863. ACM (2019). https://doi.org/10.1145/3321707.3321827

  34. Schumer, M.A., Steiglitz, K.: Adaptive step size random search. IEEE Trans. Autom. Control 13, 270–276 (1968)

  35. Sudholt, D.: A new method for lower bounds on the running time of evolutionary algorithms. IEEE Trans. Evolut. Comput. 17, 418–435 (2013)

  36. Trench, W.F.: Introduction to real analysis. Open Textbook Initiative, American Institute of Mathematics (2013)

  37. Wegener, I.: Theoretical aspects of evolutionary algorithms. In: F. Orejas, P.G. Spirakis, J. van Leeuwen (eds.) Proc. of the 28th International Colloquium on Automata, Languages and Programming (ICALP’01), Lecture Notes in Computer Science, vol. 2076, pp. 64–78. Springer (2001)

Download references

Acknowledgements

Our work was supported by a public grant as part of the Investissement d’avenir project, reference ANR-11-LABX-0056-LMH, and by the Paris Ile-de-France Region.

Author information

Corresponding author

Correspondence to Carola Doerr.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

An extended abstract announcing the results presented in this work has been communicated at GECCO’19 [13].

Cite this article

Doerr, B., Doerr, C. & Lengler, J. Self-Adjusting Mutation Rates with Provably Optimal Success Rules. Algorithmica 83, 3108–3147 (2021). https://doi.org/10.1007/s00453-021-00854-3
