Fast Mutation in Crossover-Based Algorithms

Abstract

The heavy-tailed mutation operator proposed in Doerr et al. (GECCO 2017), called fast mutation to agree with the previously used language, has so far been proven advantageous only in mutation-based algorithms. There, it relieves the algorithm designer from determining the optimal mutation rate while still yielding a performance close to the one obtained with the optimal rate. In this first runtime analysis of a crossover-based algorithm using a heavy-tailed choice of the mutation rate, we show an even stronger impact. For the \((1+(\lambda ,\lambda ))\) genetic algorithm optimizing the OneMax benchmark function, we show that with a heavy-tailed mutation rate a linear runtime can be achieved. This is asymptotically faster than what can be obtained with any static mutation rate, and is asymptotically equivalent to the runtime of the self-adjusting version of the parameter choice of the \((1+(\lambda ,\lambda ))\) genetic algorithm. This result is complemented by an empirical study which shows the effectiveness of the fast mutation also on random satisfiable MAX-3SAT instances.
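To make the heavy-tailed parameter choice concrete, the following Python sketch shows one way to draw the parameter \(\lambda \) from a power-law distribution with exponent \(\beta \) on \(\{1, \ldots , u\}\) and to derive the mutation rate from it. This is only an illustration, assuming the standard convention of the \((1+(\lambda ,\lambda ))\) GA that the mutation rate is \(\lambda /n\) and the crossover bias is \(1/\lambda \); the variable names are ours, not the paper's.

```python
import random

def sample_power_law(beta, u, rng=random):
    """Draw lambda from the power-law distribution on {1, ..., u} with
    Pr[lambda = i] proportional to i^(-beta)."""
    support = range(1, u + 1)
    weights = [i ** (-beta) for i in support]
    return rng.choices(support, weights=weights, k=1)[0]

# Illustrative heavy-tailed parameter choice for one iteration (assumed
# convention: mutation rate lambda/n, crossover bias 1/lambda):
n, beta, u = 1000, 2.5, 1000
lam = sample_power_law(beta, u)
mutation_rate = lam / n
crossover_bias = 1 / lam
```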

Notes

  1. As a reviewer of [1] pointed out, in [28] an upper bound was shown for the runtime of the \((1 + 1)\) EA with general mutation rate on the hurdle problem with hurdle widths 2 and 3. This upper bound is minimized by the mutation rates \(\frac{2}{n}\) and \(\frac{3}{n}\), respectively. This could have been seen earlier as a hint that larger mutation rates can be useful. Since the central research question discussed in [28] was whether crossover is beneficial or not, this detail was apparently overlooked by the broader scientific audience.

  2. We note that the work [5] conducted in parallel to ours suggests that a different choice is necessary when large fitness valleys need to be crossed.

  3. This mutation can be interpreted as a standard bit mutation with rate \(\frac{\alpha }{n}\), but conditioned on all individuals having the same number of flipped bits; a minimal sketch of this interpretation is given after these notes.

  4. https://github.com/mbuzdalov/generic-onell.
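To illustrate note 3, here is a minimal Python sketch of the mutation step under that interpretation: the number \(\ell \) of bits to flip is sampled once from \(\mathrm{Bin}(n, \alpha /n)\), and then every offspring flips exactly \(\ell \) bits chosen uniformly at random. The use of numpy and the helper name are our assumptions; this is not the implementation from the repository linked in note 4.

```python
import numpy as np

def mutation_phase(parent, alpha, lam, rng=None):
    """Create lam offspring as described in note 3: sample the number of
    flipped bits ell once from Bin(n, alpha/n), then flip exactly ell
    uniformly chosen bits in every offspring."""
    rng = rng or np.random.default_rng()
    n = len(parent)
    ell = rng.binomial(n, alpha / n)            # number of flips, shared by all offspring
    offspring = []
    for _ in range(lam):
        child = parent.copy()
        positions = rng.choice(n, size=ell, replace=False)
        child[positions] ^= 1                   # flip the chosen bits
        offspring.append(child)
    return offspring

# Example: parent = np.zeros(100, dtype=int); mutation_phase(parent, alpha=3, lam=5)
```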

References

  1. Antipov, D., Buzdalov, M., Doerr, B.: Fast mutation in crossover-based algorithms. In: Genetic and Evolutionary Computation Conference, GECCO 2020, pp. 1268–1276. ACM (2020)

  2. Auger, A., Doerr, B. (eds.): Theory of Randomized Search Heuristics. World Scientific Publishing (2011)

  3. Antipov, D., Doerr, B.: Runtime analysis of a heavy-tailed \((1+(\lambda , \lambda ))\) genetic algorithm on jump functions. In: Parallel Problem Solving From Nature, PPSN 2020, Part II, pp. 545–559. Springer (2020)

  4. Antipov, D., Doerr, B., Karavaev, V.: A tight runtime analysis for the \({(1 + (\lambda ,\lambda ))}\) GA on LeadingOnes. In: Foundations of Genetic Algorithms, FOGA 2019, pp. 169–182. ACM (2019)

  5. Antipov, D., Doerr, B., Karavaev, V.: The \((1 + (\lambda ,\lambda ))\) GA is even faster on multimodal problems. In: Genetic and Evolutionary Computation Conference, GECCO 2020, pp. 1259–1267. ACM (2020)

  6. Bäck, T.: Optimal mutation rates in genetic search. In: International Conference on Genetic Algorithms, ICGA 1993, pp. 2–8. Morgan Kaufmann (1993)

  7. Buzdalov, M., Doerr, B.: Runtime analysis of the \({(1+(\lambda ,\lambda ))}\) genetic algorithm on random satisfiable 3-CNF formulas. In: Genetic and Evolutionary Computation Conference, GECCO 2017, pp. 1343–1350. ACM (2017)

  8. Doerr, B., Doerr, C.: Optimal static and self-adjusting parameter choices for the \({(1+(\lambda ,\lambda ))}\) genetic algorithm. Algorithmica 80, 1658–1709 (2018)

  9. Doerr, B., Doerr, C., Ebel, F.: From black-box complexity to designing new genetic algorithms. Theoret. Comput. Sci. 567, 87–104 (2015)

  10. Doerr, B., Jansen, T., Sudholt, D., Winzen, C., Zarges, C.: Mutation rate matters even when optimizing monotone functions. Evol. Comput. 21, 1–21 (2013)

  11. Doerr, B., Künnemann, M.: Optimizing linear functions with the \((1+\lambda )\) evolutionary algorithm–different asymptotic runtimes for different instances. Theoret. Comput. Sci. 561, 3–23 (2015)

  12. Doerr, B., Le, H.P., Makhmara, R., Nguyen, T.D.: Fast genetic algorithms. In: Genetic and Evolutionary Computation Conference, GECCO 2017, pp. 777–784. ACM (2017)

  13. Doerr, B., Neumann, F. (eds.): Theory of Evolutionary Computation—Recent Developments in Discrete Optimization. Springer. https://cs.adelaide.edu.au/~frank/papers/TheoryBook2019-selfarchived.pdf (2020)

  14. Doerr, B.: Does comma selection help to cope with local optima? In: Genetic and Evolutionary Computation Conference, GECCO 2020, pp. 1304–1313. ACM (2020)

  15. Doerr, B.: Probabilistic tools for the analysis of randomized optimization heuristics. In: Doerr, B., Neumann, F. (eds) Theory of Evolutionary Computation: Recent Developments in Discrete Optimization, pp. 1–87. Springer, https://arxiv.org/abs/1801.06733 (2020)

  16. Garnier, J., Kallel, L., Schoenauer, M.: Rigorous hitting times for binary mutations. Evol. Comput. 7, 173–203 (1999)

  17. Goldman, B.W., Punch, W.F.: Parameter-less population pyramid. In: Genetic and Evolutionary Computation Conference, GECCO 2014, pp. 785–792. ACM (2014)

  18. Gießen, C., Witt, C.: The interplay of population size and mutation probability in the \({(1 + \lambda )}\) EA on OneMax. Algorithmica 78, 587–609 (2017)

  19. He, J., Yao, X.: Drift analysis and average time complexity of evolutionary algorithms. Artif. Intell. 127, 51–81 (2001)

  20. Jansen, T.: Analyzing Evolutionary Algorithms: The Computer Science Perspective. Springer, Berlin (2013)

  21. Jansen, T., De Jong, K.A., Wegener, I.: On the choice of the offspring population size in evolutionary algorithms. Evol. Comput. 13, 413–440 (2005)

  22. Lehre, P.K.: Negative drift in populations. In: Parallel Problem Solving from Nature, PPSN 2010, pp. 244–253. Springer (2010)

  23. Lehre, P.K.: Fitness-levels for non-elitist populations. In: Genetic and Evolutionary Computation Conference, GECCO 2011, pp. 2075–2082. ACM (2011)

  24. Lengler, J.: A general dichotomy of evolutionary algorithms on monotone functions. In: Parallel Problem Solving from Nature, PPSN 2018, Part II, pp. 3–15. Springer (2018)

  25. Mühlenbein, H.: How genetic algorithms really work: mutation and hillclimbing. In: Parallel Problem Solving from Nature, PPSN 1992, pp. 15–26. Elsevier (1992)

  26. Neumann, F., Witt, C.: Bioinspired Computation in Combinatorial Optimization: Algorithms and Their Computational Complexity. Springer, Berlin (2010)

  27. Pinto, E.C., Doerr, C.: Towards a more practice-aware runtime analysis of evolutionary algorithms. CoRR abs/1812.00493 (2018)

  28. Prügel-Bennett, A.: When a genetic algorithm outperforms hill-climbing. Theoret. Comput. Sci. 320, 135–153 (2004)

  29. Rowe, J.E., Sudholt, D.: The choice of the offspring population size in the (1, \(\lambda \)) evolutionary algorithm. Theoret. Comput. Sci. 545, 20–38 (2014)

  30. Szu, H.H., Hartley, R.L.: Fast simulated annealing. Phys. Lett. A 122, 157–162 (1987)

  31. Teytaud, O., Gelly, S.: General lower bounds for evolutionary algorithms. In: Parallel Problem Solving from Nature, PPSN 2006, pp. 21–31. Springer (2006)

  32. Wald, A.: Some generalizations of the theory of cumulative sums of random variables. Ann. Math. Stat. 16, 287–293 (1945)

  33. Witt, C.: Runtime analysis of the (\(\mu \) + 1) EA on simple pseudo-Boolean functions. Evol. Comput. 14, 65–86 (2006)

  34. Witt, C.: Tight bounds on the optimization time of a randomized search heuristic on linear functions. Comb. Probab. Comput. 22, 294–318 (2013)

  35. Yao, X., Liu, Y.: Fast evolution strategies. In: Evolutionary Programming, volume 1213 of Lecture Notes in Computer Science, pp. 151–162. Springer (1997)

  36. Yao, X., Liu, Y., Lin, G.: Evolutionary programming made faster. IEEE Trans. Evol. Comput. 3, 82–102 (1999)

Acknowledgements

This work was supported by a public grant as part of the Investissement d’avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH and by RFBR and CNRS, Project number 20-51-15009.

Corresponding author

Correspondence to Denis Antipov.

Extended version of the paper [1] in the proceedings of GECCO. This version contains all proofs and other details that had to be omitted in the conference version for reasons of space. Also, we have greatly expanded the experimental section.

Appendices

Appendix: Computation of Table 1

In this appendix we compute all estimates of the true progress probability \(p_{d(x)}\) shown in Table 1. We use the same expression for estimating \(p_{d(x)}\) as in Lemma 8, which by Lemma 7 is

$$\begin{aligned} p_{d(x)}&= \sum _{\lambda = 1}^{u} C_{\beta , u} \lambda ^{-\beta } p_{d(x)}(\lambda ) \\&\ge {\left\{ \begin{array}{ll} C_{\beta , u} C \frac{d(x)}{n} \sum _{\lambda = 1}^{u} \lambda ^{2-\beta }, &{}\text {if}\quad u \le \sqrt{\frac{n}{d(x)}}, \\ C_{\beta , u} C \frac{d(x)}{n} \sum _{\lambda = 1}^{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor } \lambda ^{2-\beta } + C_{\beta , u} C \sum _{\lambda = \lfloor \sqrt{\frac{n}{d(x)}} \rfloor + 1}^u \lambda ^{-\beta }, &{}\text {else,} \end{array}\right. } \end{aligned}$$

where C is some constant. Recall that by Lemma 4 we have the following bounds on \(C_{\beta , u}\) (a small numerical check of these bounds is sketched after the list):

  • If \(\beta < 0\), then \(C_{\beta , u} \ge u^{\beta - 1} \frac{1 - \beta }{2 - \beta }\),

  • If \(\beta \in [0, 1)\), then \(C_{\beta , u} \ge u^{\beta - 1} (1 - \beta )\),

  • If \(\beta = 1\), then \(C_{\beta , u} \ge \frac{1}{\ln (u) + 1}\), and

  • If \(\beta > 1\), then \(C_{\beta , u} \ge \frac{\beta - 1}{\beta }\).
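As a quick numerical sanity check of these bounds (not part of the proof), the following Python sketch compares the exact normalization constant \(C_{\beta , u} = (\sum _{\lambda = 1}^{u} \lambda ^{-\beta })^{-1}\), which is how \(C_{\beta , u}\) enters the bound on \(p_{d(x)}\) above, with the lower bounds from Lemma 4 for a few values of \(\beta \) and u.

```python
import math

def C(beta, u):
    """Exact normalization constant of the power law on {1, ..., u}."""
    return 1.0 / sum(l ** (-beta) for l in range(1, u + 1))

def lemma4_lower_bound(beta, u):
    """Lower bounds on C_{beta,u} as listed above."""
    if beta < 0:
        return u ** (beta - 1) * (1 - beta) / (2 - beta)
    if beta < 1:
        return u ** (beta - 1) * (1 - beta)
    if beta == 1:
        return 1.0 / (math.log(u) + 1)
    return (beta - 1) / beta

for beta in (-1.0, 0.5, 1.0, 1.5, 2.5, 3.5):
    for u in (10, 100, 1000):
        assert C(beta, u) >= lemma4_lower_bound(beta, u)
```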

Now we consider 11 cases depending on \(\beta \) and u. We start with the cases when \(u \le \sqrt{\frac{n}{d(x)}}\) and therefore estimate \(p_{d(x)}\) as

$$\begin{aligned} p_{d(x)}&\ge C_{\beta , u} C \frac{d(x)}{n} \sum _{\lambda = 1}^{u} \lambda ^{2-\beta }. \end{aligned}$$

Case 1 \(\beta < 0\), \(u \le \sqrt{\frac{n}{d(x)}}\).

By Lemma 3 we have

$$\begin{aligned} p_{d(x)}&\ge C C_{\beta , u} \frac{d(x)}{n} \sum _{\lambda = 1}^u \lambda ^{2 - \beta } \\&\ge C \cdot u^{\beta - 1} \frac{1 - \beta }{2 - \beta } \cdot \frac{d(x)}{n} \cdot \frac{u^{3 - \beta } - 1}{3 - \beta } = \Omega \left( \frac{d(x)u^2}{n}\right) . \end{aligned}$$

Case 2 \(\beta \in [0, 1)\), \(u \le \sqrt{\frac{n}{d(x)}}\).

By Lemma 3 we have

$$\begin{aligned} p_{d(x)}&\ge C C_{\beta , u} \frac{d(x)}{n} \sum _{\lambda = 1}^u \lambda ^{2 - \beta } \\&\ge C \cdot u^{\beta - 1} (1 - \beta ) \cdot \frac{d(x)}{n} \cdot \frac{u^{3 - \beta } - 1}{3 - \beta } = \Omega \left( \frac{d(x)u^2}{n}\right) , \end{aligned}$$

which is the same as in Case 1.

Case 3 \(\beta = 1\), \(u \le \sqrt{\frac{n}{d(x)}}\).

In this case we have

$$\begin{aligned} p_{d(x)}&\ge C C_{\beta , u} \frac{d(x)}{n} \sum _{\lambda = 1}^u \lambda \\&\ge C \cdot \frac{1 }{\ln (u) + 1} \cdot \frac{d(x)}{n} \cdot \frac{u(u + 1)}{2} \\&= \Omega \left( \frac{d(x)u^2}{n\log (u)}\right) . \end{aligned}$$

Case 4 \(\beta \in (1, 3)\), \(u \le \sqrt{\frac{n}{d(x)}}\).

By Lemma 3 we have

$$\begin{aligned} p_{d(x)}&\ge C C_{\beta , u} \frac{d(x)}{n} \sum _{\lambda = 1}^u \lambda ^{2 - \beta } \\&\ge C \cdot \frac{\beta - 1}{\beta } \cdot \frac{d(x)}{n} \cdot \frac{u^{3 - \beta } - 1}{3 - \beta } \\&= \Omega \left( \frac{d(x)u^{3 - \beta }}{n}\right) . \end{aligned}$$

Case 5 \(\beta = 3\), \(u \le \sqrt{\frac{n}{d(x)}}\).

By Lemma 3 we have

$$\begin{aligned} p_{d(x)}&\ge C C_{\beta , u} \frac{d(x)}{n} \sum _{\lambda = 1}^u \lambda ^{-1} \\&\ge C \cdot \frac{2}{3} \cdot \frac{d(x)}{n} \cdot \ln (u) \\&= \Omega \left( \frac{d(x)\log (u)}{n}\right) . \end{aligned}$$

Case 6 \(\beta > 3\), \(u \le \sqrt{\frac{n}{d(x)}}\).

We have

$$\begin{aligned} p_{d(x)}&\ge C C_{\beta , u} \frac{d(x)}{n} \sum _{\lambda = 1}^u \lambda ^{2 - \beta } \\&\ge C \cdot \frac{\beta - 1}{\beta } \cdot \frac{d(x)}{n} \cdot 1 \\&= \Omega \left( \frac{d(x)}{n}\right) . \end{aligned}$$

In the following cases we consider \(u > \sqrt{\frac{n}{d(x)}}\), hence we estimate \(p_{d(x)}\) as

$$\begin{aligned} p_{d(x)}&\ge C_{\beta , u} C \frac{d(x)}{n} \sum _{\lambda = 1}^{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor } \lambda ^{2-\beta } + C_{\beta , u} C \sum _{\lambda = \lfloor \sqrt{\frac{n}{d(x)}} \rfloor + 1}^u \lambda ^{-\beta } \\&= C C_{\beta , u} \left( \frac{d(x)}{n} \sum _{\lambda = 1}^{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor } \lambda ^{2-\beta } + \sum _{\lambda = \lfloor \sqrt{\frac{n}{d(x)}} \rfloor + 1}^u \lambda ^{-\beta }\right) . \end{aligned}$$

In all cases we first estimate the sums in the brackets and then plug the result into the inequality.

Case 7 \(\beta < 1\), \(u > \sqrt{\frac{n}{d(x)}}\).

We consider three sub-cases.

  1. 1.

    When \(u \le 2\sqrt{\frac{n}{d(x)}} + 2\) and \(\sqrt{\frac{n}{d(x)}} \le 4\). In this case we also have \(u \le 2 \cdot 4 + 2 = 10\). Hence,

    $$\begin{aligned} \frac{d(x)}{n} \sum _{\lambda = 1}^{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor } \lambda ^{2-\beta } \ge \frac{d(x)}{n} \ge \frac{1}{16} \ge \frac{u^{1 - \beta }}{16 \cdot 10^{1 - \beta }}. \end{aligned}$$
  2. 2.

    When \(u \le 2\sqrt{\frac{n}{d(x)}} + 2\) and \(\sqrt{\frac{n}{d(x)}} > 4\). In this case we have \(\sqrt{\frac{n}{d(x)}} \ge \frac{u}{2} - 1\). We also have that \(\sqrt{\frac{n}{d(x)}}^{3 - \beta } \ge 4^{3 - \beta } > 2^{4 - \beta }\) (therefore, \((\sqrt{\frac{n}{d(x)}} / 2)^{3 - \beta } > 2\)). Hence, by Lemma 3 we have

    $$\begin{aligned} \frac{d(x)}{n} \sum _{\lambda = 1}^{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor } \lambda ^{2-\beta }&\ge \frac{d(x)}{n} \sum _{\lambda = 1}^{\lceil \sqrt{\frac{n}{d(x)}} - 1 \rceil } \lambda ^{2-\beta } \ge \frac{d(x)}{n} \cdot \frac{\left( \sqrt{\frac{n}{d(x)}} - 1\right) ^{3 - \beta } - 1}{3 - \beta } \\&\ge \frac{d(x)}{n} \cdot \frac{\left( \sqrt{\frac{n}{d(x)}} / 2\right) ^{3 - \beta } - 1}{3 - \beta } \\&\ge \frac{d(x)}{n} \cdot \frac{\left( \sqrt{\frac{n}{d(x)}} / 2\right) ^{3 - \beta }}{2(3 - \beta )} \\&\ge \sqrt{\frac{n}{d(x)}}^{1 - \beta } \frac{1}{2^{4 - \beta }(3 - \beta )} \\&\ge \left( \frac{u}{2} - 1\right) ^{1 - \beta } \frac{1}{2^{4 - \beta }(3 - \beta )} \\&\ge \frac{u^{1 - \beta }}{2^{(6 - 3 \beta )}(3 - \beta )}. \end{aligned}$$
  3. 3.

    When \(u > 2\sqrt{\frac{n}{d(x)}} + 2\). In the same way as in Lemma 3 we estimate a sum via a corresponding integral.

    $$\begin{aligned} \sum _{\lambda = \lfloor \sqrt{\frac{n}{d(x)}} \rfloor + 1}^u \lambda ^{-\beta }&\ge \int _{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor + 1}^u x^{-\beta } dx \ge \int _{u/2}^u x^{-\beta } dx = u^{1 - \beta } \cdot \frac{1 - 2^{\beta - 1}}{1 - \beta }. \end{aligned}$$

Combining the three sub-cases, we see that for each \(\beta < 1\) there exists a constant \(\gamma _1(\beta ) = \min \{\frac{1}{16 \cdot 10^{1 - \beta }}, \frac{1}{2^{(6 - 3 \beta )}(3 - \beta )}, \frac{1 - 2^{\beta - 1}}{1 - \beta }\}\) such that

$$\begin{aligned} \frac{d(x)}{n} \sum _{\lambda = 1}^{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor } \lambda ^{2-\beta } + \sum _{\lambda = \lfloor \sqrt{\frac{n}{d(x)}} \rfloor + 1}^u \lambda ^{-\beta } \ge \gamma _1(\beta ) \cdot u^{1 - \beta }. \end{aligned}$$

If \(\beta < 0\), we have

$$\begin{aligned} p_{d(x)}&\ge C C_{\beta , u} \gamma _1(\beta ) u^{1 - \beta } \ge C u^{\beta - 1} \frac{1 - \beta }{2 - \beta } \gamma _1(\beta ) u^{1 - \beta } = \Omega (1). \end{aligned}$$

If \(\beta \in [0, 1)\), we have

$$\begin{aligned} p_{d(x)}&\ge C C_{\beta , u} \gamma _1(\beta ) u^{1 - \beta } \ge C u^{\beta - 1} (1 - \beta ) \gamma _1(\beta ) u^{1 - \beta } = \Omega (1). \end{aligned}$$

Case 8 \(\beta = 1\), \(u > \sqrt{\frac{n}{d(x)}}\). We aim to show that

$$\begin{aligned} p_{d(x)} \ge C \cdot \left( \frac{1}{36\ln (u)} + \frac{\ln (u) - \ln \left( \sqrt{\frac{n}{d(x)}}\right) }{36\ln (u)}\right) . \end{aligned}$$

Note that in this case we do not use asymptotic notation for estimating \(p_{d(x)}\), since the bound above contains terms of different signs (and thus the leading constants of these terms are important). However, note that as long as u exceeds \(\sqrt{\frac{n}{d(x)}}\) by at most a constant factor, the first term is dominant, and therefore this bound is \(\Omega (\frac{1}{\log (u)})\). If u is at least \(\phi \cdot \sqrt{\frac{n}{d(x)}}\) for some super-constant \(\phi \), then this bound is \(\Omega (\frac{\log (\phi )}{\log (u)})\).

In this case we have \(u > \sqrt{\frac{n}{d(x)}} \ge 1\), hence \(u \ge 2\). Therefore, by Lemma 4 we have

$$\begin{aligned} C_{1,u} \ge \frac{1}{1 + \ln (u)} = \frac{1}{\ln (u)} \cdot \frac{\ln (u)}{1 + \ln (u)} \ge \frac{1}{\ln (u)} \cdot \frac{\ln (2)}{\ln (2) + 1} > \frac{1}{3\ln (u)}. \end{aligned}$$

By the formula for the sum of an arithmetic progression, and by estimating the second sum via the corresponding integral in the same way as in Lemma 3, we have

$$\begin{aligned}&\frac{d(x)}{n} \sum _{\lambda = 1}^{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor } \lambda + \sum _{\lambda = \lfloor \sqrt{\frac{n}{d(x)}} \rfloor + 1}^u \lambda ^{-1} \\&\quad \ge \frac{d(x)}{n} \cdot \frac{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor \left( \lfloor \sqrt{\frac{n}{d(x)}} \rfloor + 1\right) }{2} + \int _{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor + 1}^u \frac{dx}{x} \end{aligned}$$

Since for all \(x \ge 1\) we have \(\frac{\lfloor x \rfloor }{x} \ge \frac{1}{2}\) and \(\frac{\lfloor x \rfloor + 1}{x} \ge 1\), we also have

$$\begin{aligned} \frac{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor \left( \lfloor \sqrt{\frac{n}{d(x)}} \rfloor + 1\right) }{2n/d(x)} \ge \frac{1}{4}. \end{aligned}$$

Now we consider two sub-cases. First, let \(u \le e^2 \sqrt{\frac{n}{d(x)}}\). Then we have

$$\begin{aligned} p_{d(x)} \ge C C_{1, u} \cdot \frac{1}{4} \ge \frac{C}{12\ln (u)}. \end{aligned}$$

Otherwise, if \(u > e^2 \sqrt{\frac{n}{d(x)}}\), then we estimate the integral by

$$\begin{aligned} \int _{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor + 1}^u \frac{dx}{x}&\ge \int _{\sqrt{\frac{n}{d(x)}} + 1}^u \frac{dx}{x} = \ln (u) - \ln \left( \sqrt{\frac{n}{d(x)}} + 1\right) \\&= \ln (u) - \ln \left( \sqrt{\frac{n}{d(x)}}\right) - \ln \left( 1 + \sqrt{\frac{d(x)}{n}}\right) \\&\ge \ln (u) - \ln \left( \sqrt{\frac{n}{d(x)}}\right) - \sqrt{\frac{d(x)}{n}} \\&\ge \frac{\ln (u) - \ln \left( \sqrt{\frac{n}{d(x)}}\right) }{2} + \frac{\ln (u) - \ln \left( \frac{u}{e^2}\right) }{2} - 1 \\&= \frac{\ln (u) - \ln \left( \sqrt{\frac{n}{d(x)}}\right) }{2} + 1 - 1. \end{aligned}$$

Hence, we conclude

$$\begin{aligned} p_{d(x)}&\ge C C_{1, u} \left( \frac{1}{4} + \frac{\ln (u) - \ln \left( \sqrt{\frac{n}{d(x)}}\right) }{2}\right) \\&\ge C \cdot \frac{1 + 2\left( \ln (u) - \ln \left( \sqrt{\frac{n}{d(x)}}\right) \right) }{12\ln (u)}. \end{aligned}$$

We unite the two sub-cases with the following lower bound, which holds both for \(u \le e^2 \sqrt{\frac{n}{d(x)}}\) and for \(u > e^2 \sqrt{\frac{n}{d(x)}}\).

$$\begin{aligned} p_{d(x)}&\ge C \cdot \frac{1 + 2\left( \ln (u) - \ln \left( \sqrt{\frac{n}{d(x)}}\right) \right) }{36\ln (u)} \\&\ge C \cdot \frac{1 + \ln (u) - \ln \left( \sqrt{\frac{n}{d(x)}}\right) }{36\ln (u)}. \end{aligned}$$

Case 9 \(\beta \in (1, 3)\), \(u > \sqrt{\frac{n}{d(x)}}\).

We consider three sub-cases.

  1. 1.

    When \(\beta \le 2\) and \(\sqrt{\frac{n}{d(x)}} \le 2\).

    $$\begin{aligned} \frac{d(x)}{n} \sum _{\lambda = 1}^{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor } \lambda ^{2-\beta }&\ge \frac{d(x)}{n} = \sqrt{\frac{n}{d(x)}}^{1 - \beta } \cdot \sqrt{\frac{n}{d(x)}}^{\beta - 3} \ge \sqrt{\frac{n}{d(x)}}^{1 - \beta } \cdot \left( \frac{1}{2}\right) ^{\beta - 3} \\&\ge \sqrt{\frac{n}{d(x)}}^{1 - \beta } \cdot \left( \frac{1}{2}\right) ^2 = \frac{1}{4}\sqrt{\frac{n}{d(x)}}^{1 - \beta }. \end{aligned}$$
  2. 2.

    When \(\beta > 2\) and \(\lfloor \sqrt{\frac{n}{d(x)}} \rfloor \le 2^{\frac{1}{3 - \beta }}\). In this case we also have \(\sqrt{\frac{n}{d(x)}} \le 2^{\frac{1}{3 - \beta }} + 1\). Hence, we have

    $$\begin{aligned} \frac{d(x)}{n} \sum _{\lambda = 1}^{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor } \lambda ^{2-\beta }&\ge \frac{d(x)}{n} = \sqrt{\frac{n}{d(x)}}^{1 - \beta } \cdot \sqrt{\frac{n}{d(x)}}^{\beta - 3} \\&\ge \sqrt{\frac{n}{d(x)}}^{1 - \beta } \cdot \left( 2^{\frac{1}{3 - \beta }} + 1\right) ^{\beta - 3} \\&\ge \sqrt{\frac{n}{d(x)}}^{1 - \beta } \cdot \left( 2^{\left( \frac{1}{3 - \beta } + 1\right) }\right) ^{\beta - 3} \\&= 2^{\beta - 4}\sqrt{\frac{n}{d(x)}}^{1 - \beta } \ge \frac{1}{4}\sqrt{\frac{n}{d(x)}}^{1 - \beta }. \end{aligned}$$
  3. 3.

    When \(\beta > 2\) and \(\lfloor \sqrt{\frac{n}{d(x)}} \rfloor \ge 2^{\frac{1}{3 - \beta }}\) or when \(\beta \le 2\) and \(\sqrt{\frac{n}{d(x)}} > 2\). In this case we have both \(\lfloor \sqrt{\frac{n}{d(x)}} \rfloor ^{3 - \beta } \ge 2\) and \(\sqrt{\frac{n}{d(x)}} \ge 2\). Hence, by Lemma 3 we have

    $$\begin{aligned} \frac{d(x)}{n} \sum _{\lambda = 1}^{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor } \lambda ^{2-\beta }&\ge \frac{d(x)}{n} \cdot \frac{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor ^{3 - \beta } - 1}{3 - \beta } \ge \frac{d(x)}{n} \cdot \frac{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor ^{3 - \beta }}{2(3 - \beta )}\\&\ge \frac{d(x)}{n} \cdot \frac{\left( \sqrt{\frac{n}{d(x)}} - 1 \right) ^{3 - \beta }}{2(3 - \beta )} \ge \frac{d(x)}{n} \cdot \frac{\left( \frac{1}{2}\sqrt{\frac{n}{d(x)}}\right) ^{3 - \beta }}{2(3 - \beta )} \\&\ge \sqrt{\frac{n}{d(x)}}^{1 - \beta } \frac{1}{2^{4 - \beta }(3 - \beta )}. \end{aligned}$$

Combining the three sub-cases, we see that for each \(\beta \in (1, 3)\) there exists a constant \(\gamma _2(\beta ) = \min \{\frac{1}{4}, \frac{1}{2^{4 - \beta }(3 - \beta )}\}\) such that

$$\begin{aligned} \frac{d(x)}{n} \sum _{\lambda = 1}^{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor } \lambda ^{2-\beta } \ge \gamma _2(\beta ) \cdot \sqrt{\frac{n}{d(x)}}^{1 - \beta }. \end{aligned}$$

Taking into account that \(C_{\beta , u} \ge \frac{\beta - 1}{\beta }\), we obtain

$$\begin{aligned} p_{d(x)} \ge C C_{\beta , u} \gamma _2(\beta ) \sqrt{\frac{n}{d(x)}}^{1 - \beta } = \Omega \left( \sqrt{\frac{n}{d(x)}}^{1 - \beta }\right) . \end{aligned}$$

Case 10 \(\beta = 3\), \(u > \sqrt{\frac{n}{d(x)}}\).

If \(\sqrt{\frac{n}{d(x)}} \ge 2\), we compute

$$\begin{aligned} p_{d(x)}&\ge C C_{3, u} \frac{d(x)}{n} \sum _{\lambda = 1}^{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor } \lambda ^{-1} \ge C \cdot \frac{2}{3} \cdot \frac{d(x)}{n} \ln \left( \lfloor \sqrt{\frac{n}{d(x)}} \rfloor \right) \\&= \Omega \left( \frac{\ln \left( \sqrt{\frac{n}{d(x)}}\right) }{n/d(x)}\right) . \end{aligned}$$

Otherwise,

$$\begin{aligned} p_{d(x)}&\ge C C_{3, u} \frac{d(x)}{n} = \Omega \left( \frac{1}{n/d(x)}\right) . \end{aligned}$$

Therefore,

$$\begin{aligned} p_{d(x)}&= \Omega \left( \frac{\ln \left( \sqrt{\frac{n}{d(x)}}\right) + 1}{n/d(x)}\right) . \end{aligned}$$

Case 11 \(\beta > 3\), \(u > \sqrt{\frac{n}{d(x)}}\).

In this case we have

$$\begin{aligned} p_{d(x)}&\ge C C_{\beta , u} \frac{d(x)}{n} \sum _{\lambda = 1}^{\lfloor \sqrt{\frac{n}{d(x)}} \rfloor } \lambda ^{2 - \beta } \\&\ge C \cdot \frac{d(x)}{n} \cdot \frac{\beta - 1}{\beta } \cdot 1 = \Omega \left( \frac{d(x)}{n}\right) . \end{aligned}$$

Appendix: Computation of Table 2

In this appendix we compute the values of the expected runtime shown in Table 2. We start by computing the expected runtimes in terms of iterations for each value of the algorithm's meta-parameter \(\beta \). Recall that \(p_d\) is the probability of creating a better offspring in one iteration, for which lower bounds are shown in Table 1. Hence, using the fitness level argument, we can estimate the expected number of iterations before the optimum is found as follows.

$$\begin{aligned} E[T_I] \le \sum _{d = 1}^{n} \frac{1}{p_d} = \sum _{d = 1}^{\lfloor \frac{n}{u^2} \rfloor } \frac{1}{p_d} + \sum _{d = \lfloor \frac{n}{u^2} \rfloor + 1}^{n} \frac{1}{p_d}. \end{aligned}$$

Note that in the first sum we have \(u \le \sqrt{\frac{n}{d}}\) (thus, we should use values for \(p_d\) from the left column of Table 1) and in the second sum we have \(u > \sqrt{\frac{n}{d}}\) (thus, we should use the estimates from the right column). Note that \(p_d = \Omega (f(n, d, u))\) in Table 1 means that for each \(\beta \) there exists a constant \(\gamma (\beta )\) (independent of n, d and u) such that \(p_d \ge \gamma (\beta ) \cdot f(n, d, u)\). We will use this constant in our further computations.
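The following Python sketch mirrors this split of the fitness-level sum. The two callables are placeholders for the lower bounds on \(p_d\) from the left and right columns of Table 1 (with the constants \(\gamma (\beta )\) suppressed), so the numbers it produces are only illustrative.

```python
def iterations_upper_bound(n, u, p_left, p_right):
    """Fitness-level bound E[T_I] <= sum_{d=1}^{n} 1/p_d, split at d = n/u^2:
    for d <= n/u^2 we have u <= sqrt(n/d) (left column of Table 1), otherwise
    u > sqrt(n/d) (right column)."""
    split = n // (u * u)
    return sum(
        1.0 / (p_left(n, d, u) if d <= split else p_right(n, d, u))
        for d in range(1, n + 1)
    )

# Example with the beta < 1 entries of Table 1, constants dropped:
# p_d = Omega(d * u^2 / n) on the left and p_d = Omega(1) on the right.
bound = iterations_upper_bound(
    n=10**4, u=50,
    p_left=lambda n, d, u: d * u * u / n,
    p_right=lambda n, d, u: 1.0,
)
```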

To estimate the expected runtime we consider five cases.

Case 1 \(\beta < 1\).

In this case we have

$$\begin{aligned} E[T_I]&\le \sum _{d = 1}^{\lfloor \frac{n}{u^2} \rfloor } \frac{1}{p_d} + \sum _{d = \lfloor \frac{n}{u^2} \rfloor + 1}^{n} \frac{1}{p_d} \\&\le \frac{1}{\gamma (\beta )} \left( \sum _{d = 1}^{\lfloor \frac{n}{u^2} \rfloor } \frac{n}{du^2} + \sum _{d = \lfloor \frac{n}{u^2} \rfloor + 1}^{n} 1 \right) \\&\le \frac{1}{\gamma (\beta )} \left( \frac{n}{u^2} \left( \ln \lfloor \frac{n}{u^2} \rfloor + 1\right) + n - \lfloor \frac{n}{u^2} \rfloor \right) \\&= O\left( \frac{n}{u^2}\ln \left( \frac{n}{u^2}\right) + n\right) , \end{aligned}$$

where we used the estimates for the sums from Lemma 4. Note that when \(u \ge \sqrt{\ln (n)}\), we have

$$\begin{aligned} \frac{n}{u^2}\ln \left( \frac{n}{u^2}\right) \le \frac{n}{\ln (n)} \ln (n) = O(n). \end{aligned}$$

Otherwise, we have

$$\begin{aligned} \frac{n}{u^2}\ln \left( \frac{n}{u^2}\right) \ge \frac{n}{\ln (n)} (\ln (n) - \ln \ln (n)) = \Omega (n). \end{aligned}$$

Therefore, we conclude

$$\begin{aligned} E[T_I] = {\left\{ \begin{array}{ll} O\left( \frac{n}{u^2}\ln \left( \frac{n}{u^2}\right) \right) , &{}\text {if}\quad u < \sqrt{\ln (n)}, \\ O(n), &{}\text {if}\quad u \ge \sqrt{\ln (n)}. \end{array}\right. } \end{aligned}$$

Case 2 \(\beta = 1\). In this case we have

$$\begin{aligned} E[T_I]&\le \sum _{d = 1}^{\lfloor \frac{n}{u^2} \rfloor } \frac{1}{p_d} + \sum _{d = \lfloor \frac{n}{u^2} \rfloor + 1}^{n} \frac{1}{p_d}\\&\le \frac{1}{\gamma (\beta )} \left( \sum _{d = 1}^{\lfloor \frac{n}{u^2} \rfloor } \frac{n\ln (u)}{du^2} + \sum _{d = \lfloor \frac{n}{u^2} \rfloor + 1}^{n} \frac{\ln (u)}{1 + \ln (u) - \ln (\sqrt{\frac{n}{d}})} \right) \\&\le \frac{1}{\gamma (\beta )} \left( \frac{n\ln (u)}{u^2} \left( \ln \lfloor \frac{n}{u^2} \rfloor + 1\right) + \sum _{d = \lfloor \frac{n}{u^2} \rfloor + 1}^{\lfloor \frac{n}{u} \rfloor } \ln (u) + \sum _{d = \lfloor \frac{n}{u} \rfloor + 1}^{n} \frac{\ln (u)}{1 + \frac{1}{2}\ln (u)} \right) \\&\le \frac{1}{\gamma (\beta )} \left( \frac{n\ln (u)}{u^2} \left( \ln \lfloor \frac{n}{u^2} \rfloor + 1\right) + \frac{n\ln (u)}{u} + n \cdot \frac{\ln (u)}{1 + \frac{1}{2}\ln (u)} \right) \\&= O\left( \frac{n\log (u)\log (\frac{n}{u^2})}{u^2} + \frac{n\log (u)}{u} + n\right) \end{aligned}$$

Note that \(\frac{n\ln (u)\ln (\frac{n}{u^2})}{u^2}\) is a decreasing function of u for all \(u \ge 1\), which can be shown by considering its derivative (we omit this tedious computation). Hence, if \(u < \sqrt{\ln (n)\ln \ln (n)}\), then we have

$$\begin{aligned} \frac{n\ln (u)\ln (\frac{n}{u^2})}{u^2}&\ge \frac{n(\ln \ln (n) + \ln \ln \ln (n))(\ln (n) - \ln \ln (n) - \ln \ln \ln (n))}{2\ln (n)\ln \ln (n)} \\&= \Omega (n). \end{aligned}$$

For such u we also have

$$\begin{aligned} \frac{n\ln (u)\ln (\frac{n}{u^2})}{u^2}&\ge \frac{n\ln (u)}{u} \cdot \frac{\ln (\frac{n}{u^2})}{u} \\&\ge \frac{n\ln (u)}{u} \cdot \frac{(\ln (n) - \ln \ln (n) - \ln \ln \ln (n))}{\sqrt{\ln (n)\ln \ln (n)}} \\&= \Omega \left( \frac{n\log (u)}{u}\sqrt{\frac{\log (n)}{\log \log (n)}}\right) = \Omega \left( \frac{n\log (u)}{u}\right) . \end{aligned}$$

If \(u \ge \sqrt{\ln (n)\ln \ln (n)}\), we have

$$\begin{aligned} \frac{n\ln (u)\ln (\frac{n}{u^2})}{u^2} \le \frac{n\ln \ln (n)(\ln (n) - \ln \ln (n) - \ln \ln \ln (n))}{2\ln (n)\ln \ln (n)} = O(n), \end{aligned}$$

and we have

$$\begin{aligned} \frac{n\ln (u)}{u} \le n = O(n). \end{aligned}$$

Hence, we conclude

$$\begin{aligned} E[T_I] = {\left\{ \begin{array}{ll} O\left( \frac{n\log (u)}{u^2}\log \left( \frac{n}{u^2}\right) \right) , &{}\text {if}\quad u < \sqrt{\ln (n)\ln \ln (n)}, \\ O(n), &{}\text {if}\quad u \ge \sqrt{\ln (n)\ln \ln (n)}. \end{array}\right. } \end{aligned}$$

Case 3 \(\beta \in (1, 3)\).

In this case we have

$$\begin{aligned} E[T_I]&\le \sum _{d = 1}^{\lfloor \frac{n}{u^2} \rfloor } \frac{1}{p_d} + \sum _{d = \lfloor \frac{n}{u^2} \rfloor + 1}^{n} \frac{1}{p_d}\\&\le \frac{1}{\gamma (\beta )} \left( \sum _{d = 1}^{\lfloor \frac{n}{u^2} \rfloor } \frac{n}{du^{3 - \beta }} + \sum _{d = \lfloor \frac{n}{u^2} \rfloor + 1}^{n} \sqrt{\frac{n}{d}}^{\beta - 1} \right) \\&\le \frac{1}{\gamma (\beta )} \left( \frac{n}{u^{3 - \beta }}\left( \ln \left( \frac{n}{u^2}\right) + 1\right) + n^{(\beta - 1)/2}\sum _{d = 1}^{n - 1} d^{(1 - \beta )/2} \right) \\&\le \frac{1}{\gamma (\beta )} \left( \frac{n}{u^{3 - \beta }}\left( \ln \left( \frac{n}{u^2}\right) + 1\right) + n^{(\beta - 1)/2} \cdot \frac{n^{(3 - \beta )/2} - 1}{(3 - \beta )/2} \right) \\&= O\left( \frac{n}{u^{3 - \beta }}\log \left( \frac{n}{u^2}\right) + n\right) , \end{aligned}$$

where we used Lemma 4 to estimate the sums. When \(u < (\ln (n))^{1/(3 - \beta )}\), we have

$$\begin{aligned} \frac{n}{u^{3 - \beta }}\ln \left( \frac{n}{u^2}\right) \ge \frac{n(\ln (n) - \frac{2}{3 - \beta }\ln \ln (n))}{\ln (n)} = \Omega (n). \end{aligned}$$

Otherwise, we have

$$\begin{aligned} \frac{n}{u^{3 - \beta }}\ln \left( \frac{n}{u^2}\right) \le \frac{n\ln (n)}{\ln (n)} = n. \end{aligned}$$

Therefore, we have

$$\begin{aligned} E[T_I] = {\left\{ \begin{array}{ll} O\left( \frac{n}{u^{3 - \beta }}\log \left( \frac{n}{u^2}\right) \right) , &{}\text {if}\quad u < (\ln (n))^{1/(3 - \beta )}, \\ O(n), &{}\text {if}\quad u \ge (\ln (n))^{1/(3 - \beta )}. \end{array}\right. } \end{aligned}$$

Case 4 \(\beta = 3\). We compute

$$\begin{aligned} E[T_I]&\le \sum _{d = 1}^{\lfloor \frac{n}{u^2} \rfloor } \frac{1}{p_d} + \sum _{d = \lfloor \frac{n}{u^2} \rfloor + 1}^{n} \frac{1}{p_d}\\&\le \frac{1}{\gamma (\beta )} \left( \sum _{d = 1}^{\lfloor \frac{n}{u^2} \rfloor } \frac{n}{d\ln (u)} + \sum _{d = \lfloor \frac{n}{u^2} \rfloor + 1}^{n} \frac{n}{d\left( \ln \left( \frac{n}{d}\right) + 1\right) } \right) \\&\le \frac{1}{\gamma (\beta )} \left( \frac{n}{\ln (u)}\left( \ln \left( \frac{n}{u^2}\right) + 1\right) + n + n \sum _{d = \lfloor \frac{n}{u^2} \rfloor + 2}^{n} \frac{1}{d\left( \ln \left( \frac{n}{d}\right) + 1\right) } \right) \\&\le \frac{1}{\gamma (\beta )} \left( \frac{n}{\ln (u)}\left( \ln \left( \frac{n}{u^2}\right) + 1\right) + n + n \int _{n/u^2}^n \frac{dx}{x(\ln (n) - \ln (x) + 1)} \right) , \end{aligned}$$

where we used the fact that \(f(x) = \frac{1}{x(\ln (n) - \ln (x) + 1)}\) is a decreasing function on the interval [1, n] to estimate the sum via the corresponding integral. We estimate the integral as follows.

$$\begin{aligned} \int _{n/u^2}^n \frac{dx}{x(\ln (n) - \ln (x) + 1)}&= - \int _{n/u^2}^n \frac{d(\ln (n) - \ln (x) + 1)}{(\ln (n) - \ln (x) + 1)} \\&= \ln ((\ln (n) - \ln (x) + 1))\bigg |_n^{n/u^2} \\&= \ln (2\ln (u) + 1) \end{aligned}$$

Therefore,

$$\begin{aligned} E[T_I]&\le \frac{1}{\gamma (\beta )} \left( \frac{n}{\ln (u)}\left( \ln \left( \frac{n}{u^2}\right) + 1\right) + n (\ln (2\ln (u) + 1) + 1) \right) \\&= O\left( \frac{n}{\log (u)}\log \left( \frac{n}{u^2}\right) + n\log \log (u)\right) \end{aligned}$$

Note that the first term is decreasing in u, while the second one is increasing. We show that the two terms are of the same asymptotic order when \(u = n^{1/\ln \ln (n)}\).

$$\begin{aligned} \frac{n}{\ln (n^{1/\ln \ln (n)})}\ln \left( \frac{n}{n^{2/\ln \ln (n)}}\right)&= \frac{n \ln \ln (n)}{\ln (n)} \cdot \left( \ln (n) - \frac{2\ln (n)}{\ln \ln (n)}\right) \\&= \Theta (n \ln \ln (n)), \end{aligned}$$
$$\begin{aligned} n\ln \ln (n^{1/\ln \ln (n)})&= n\ln \frac{\ln (n)}{\ln \ln (n)} = n\ln \ln (n) - n\ln \ln \ln (n) \\&= \Theta (n \ln \ln (n)). \end{aligned}$$

Therefore, when \(u \le n^{1/\ln \ln (n)}\), the first term is dominant, otherwise the second term is dominant. Hence, we conclude

$$\begin{aligned} E[T_I] = {\left\{ \begin{array}{ll} O\left( \frac{n}{\log (u)}\log \left( \frac{n}{u^2}\right) \right) , &{}\text { if } u < n^{1/\ln \ln (n)}, \\ O(n\log \log (u)), &{}\text { if } u \ge n^{1/\ln \ln (n)}. \end{array}\right. } \end{aligned}$$
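As a numerical illustration of this balance point (using nothing beyond the two expressions above), one can compare the two terms at \(u = n^{1/\ln \ln (n)}\) for a concrete n; both come out within a small constant factor of \(n \ln \ln (n)\).

```python
import math

n = 10**8
u_star = n ** (1 / math.log(math.log(n)))     # the balance point u = n^(1/ln ln n)

first_term = n / math.log(u_star) * math.log(n / u_star**2)
second_term = n * math.log(math.log(u_star))

# Both are of order n * ln ln n:
print(first_term, second_term, n * math.log(math.log(n)))
```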

Case 5 \(\beta > 3\).

In this case we have

$$\begin{aligned} E[T_I]&\le \sum _{d = 1}^{n} \frac{1}{p_d} \le \frac{1}{\gamma (\beta )} \sum _{d = 1}^n \frac{n}{d} \le \frac{n(\ln (n) + 1)}{\gamma (\beta )} = O(n\log (n))\\ \end{aligned}$$

We complete the computation of the right column of Table 2 by using Wald’s equation (Lemma 1) and estimates of the expected cost of each iteration shown in Lemma 9.
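For intuition on this last step, here is a small Python sketch of the expected per-iteration cost under the power law, assuming the usual accounting for the \((1+(\lambda ,\lambda ))\) GA of \(2\lambda \) fitness evaluations per iteration (\(\lambda \) mutants plus \(\lambda \) crossover offspring); the precise asymptotics used below are those of Lemma 9, and by Wald's equation \(E[T_F]\) is the product of \(E[T_I]\) and this expected cost.

```python
def expected_cost_per_iteration(beta, u):
    """E[2*lambda] when lambda follows the power law on {1, ..., u} with
    exponent beta; each iteration is assumed to cost 2*lambda evaluations."""
    weights = [l ** (-beta) for l in range(1, u + 1)]
    norm = sum(weights)                                 # equals 1 / C_{beta,u}
    return sum(2 * l * w for l, w in zip(range(1, u + 1), weights)) / norm

# e.g. expected_cost_per_iteration(1.5, 10**4) grows like u^(2 - beta) = u^0.5,
# matching the Theta(u^{2 - beta}) factor used in Case 3 below.
```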

Case 1 \(\beta < 1\).

If \(u \ge \sqrt{\ln (n)}\), then

$$\begin{aligned} E[T_F] = O(n) \cdot \Theta (u) = O(nu). \end{aligned}$$

If \(u < \sqrt{\ln (n)}\), then

$$\begin{aligned} E[T_F] = O\left( \frac{n}{u^2}\log \frac{n}{u^2}\right) \cdot \Theta (u) = O\left( \frac{n}{u}\log \frac{n}{u^2}\right) . \end{aligned}$$

Case 2 \(\beta = 1\).

If \(u \ge \sqrt{\ln (n)\ln \ln (n)}\), then

$$\begin{aligned} E[T_F] = O(n) \cdot \Theta \left( \frac{u}{\log (u)}\right) = O\left( \frac{nu}{\log (u)}\right) . \end{aligned}$$

If \(u < \sqrt{\ln (n)\ln \ln (n)}\), then

$$\begin{aligned} E[T_F] = O\left( \frac{n\log (u)}{u^2}\log \frac{n}{u^2}\right) \cdot \Theta \left( \frac{u}{\log (u)}\right) = O\left( \frac{n}{u}\log \frac{n}{u^2}\right) . \end{aligned}$$

Case 3 \(\beta \in (1, 2)\).

If \(u \ge (\ln (n))^{1/(3 - \beta )}\), then

$$\begin{aligned} E[T_F] = O(n) \cdot \Theta (u^{2 - \beta }) = O(nu^{2 - \beta }). \end{aligned}$$

If \(u < (\ln (n))^{1/(3 - \beta )}\), then

$$\begin{aligned} E[T_F] = O\left( \frac{n}{u^{3 - \beta }}\log \frac{n}{u^2}\right) \cdot \Theta (u^{2 - \beta }) = O\left( \frac{n}{u}\log \frac{n}{u^2}\right) . \end{aligned}$$

Case 4 \(\beta = 2\).

If \(u \ge \ln (n)\), then

$$\begin{aligned} E[T_F] = O(n) \cdot \Theta (\log (u)) = O(n\log (u)). \end{aligned}$$

If \(u < \ln (n)\), then

$$\begin{aligned} E[T_F] = O\left( \frac{n}{u}\log \frac{n}{u^2}\right) \cdot \Theta (\log (u)) = O\left( \frac{n\log (u)}{u}\log \frac{n}{u^2}\right) . \end{aligned}$$

Case 5 \(\beta \in (2, 3)\).

If \(u \ge (\ln (n))^{1/(3 - \beta )}\), then

$$\begin{aligned} E[T_F] = O(n) \cdot \Theta (1) = O(n). \end{aligned}$$

If \(u < (\ln (n))^{1/(3 - \beta )}\), then

$$\begin{aligned} E[T_F] = O\left( \frac{n}{u^{3 - \beta }}\log \frac{n}{u^2}\right) \cdot \Theta (1) = O\left( \frac{n}{u^{3 - \beta }}\log \frac{n}{u^2}\right) . \end{aligned}$$

Case 6 \(\beta = 3\).

If \(u \ge n^{1/\ln \ln (n)}\), then

$$\begin{aligned} E[T_F] = O(n\log \log (u)) \cdot \Theta (1) = O(n\log \log (u)). \end{aligned}$$

If \(u < n^{1/\ln \ln (n)}\), then

$$\begin{aligned} E[T_F] = O\left( \frac{n}{\log (u)}\log \frac{n}{u^2}\right) \cdot \Theta (1) = O\left( \frac{n}{\log (u)}\log \frac{n}{u^2}\right) . \end{aligned}$$

Case 7 \(\beta > 3\).

For all u we have

$$\begin{aligned} E[T_F] = O(n\log (n)) \cdot \Theta (1) = O(n\log (n)). \end{aligned}$$

Cite this article

Antipov, D., Buzdalov, M. & Doerr, B. Fast Mutation in Crossover-Based Algorithms. Algorithmica 84, 1724–1761 (2022). https://doi.org/10.1007/s00453-022-00957-5
