
Self-Adjusting Evolutionary Algorithms for Multimodal Optimization

Published in: Algorithmica

Abstract

Recent theoretical research has shown that self-adjusting and self-adaptive mechanisms can provably outperform static settings in evolutionary algorithms on binary search spaces. However, the vast majority of these studies focuses on unimodal functions, which do not require the algorithm to flip several bits simultaneously to make progress. In fact, existing self-adjusting algorithms are not designed to detect local optima and offer no obvious benefit for crossing large Hamming gaps. We suggest a mechanism called stagnation detection that can be added as a module to existing evolutionary algorithms (both with and without prior self-adjusting schemes). Added to a simple (1+1) EA, stagnation detection provably yields an expected runtime on the well-known Jump benchmark that corresponds to an asymptotically optimal parameter setting and outperforms other mechanisms for multimodal optimization such as heavy-tailed mutation. We also investigate the module in the context of a self-adjusting (1+\(\lambda \)) EA. To explore the limitations of the approach, we additionally present an example where self-adjusting mechanisms, including stagnation detection, do not help to find a beneficial setting of the mutation rate. Finally, we investigate our module for stagnation detection experimentally.
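To make the mechanism concrete, the following minimal Python sketch bolts a stagnation detection module onto a (1+1) EA optimizing the standard Jump benchmark, as described in the abstract. It is an illustration, not the authors' exact algorithm: the counter threshold formula and the strict-selection rule below are assumptions chosen for readability, whereas the article derives an optimized threshold (see Sect. 2.1).

```python
import math
import random


def jump(x, m):
    """Standard Jump_m benchmark: OneMax shifted by m, except for a
    fitness valley of width m just below the all-ones optimum."""
    n, ones = len(x), sum(x)
    if ones == n or ones <= n - m:
        return m + ones
    return n - ones  # inside the gap: fitness points away from the optimum


def sd_one_plus_one_ea(n, m, max_iters=10**7):
    """(1+1) EA with a stagnation detection module (illustrative sketch)."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = jump(x, m)
    r = 1        # mutation strength: each bit flips with probability r/n
    counter = 0  # unsuccessful steps since the last improvement/rate change
    for _ in range(max_iters):
        if fx == n + m:  # global optimum of Jump_m reached
            break
        y = [bit ^ (random.random() < r / n) for bit in x]
        fy = jump(y, m)
        if fy > fx:  # strict improvement: accept and reset the module
            x, fx, r, counter = y, fy, 1, 0
        else:
            counter += 1
            # Assumed threshold: roughly the expected waiting time for a
            # specific r-bit flip at rate r/n, amplified by a log factor.
            if counter > (math.e * n / r) ** r * math.log(n):
                r = min(r + 1, n // 2)  # stagnation detected: search farther
                counter = 0
    return x, fx
```

For intuition: on Jump with gap size m, the static (1+1) EA with rate 1/n needs \(\Theta (n^m)\) steps in expectation, whereas the module above keeps raising r after sufficiently long unsuccessful phases until m-bit jumps become likely.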


References

  1. Aleti, A., Moser, I.: A systematic literature review of adaptive parameter control methods for evolutionary algorithms. ACM Comput. Surv. 49(3), 1–35 (2016)

  2. Antipov, D., Doerr, B.: Runtime analysis of a heavy-tailed (1+\((\lambda ,\lambda )\)) genetic algorithm on jump functions. In: T. Bäck, M. Preuss, A.H. Deutz, H. Wang, C. Doerr, M.T.M. Emmerich, H. Trautmann (eds.) Proceedings of PPSN ’20, Lecture Notes in Computer Science, Vol. 12270, pp. 545–559. Springer (2020)

  3. Antipov, D., Doerr, B., Karavaev, V.: A tight runtime analysis for the (1 + (\(\lambda , \lambda \))) GA on LeadingOnes. In: Proceedings of FOGA ’19, pp. 169–182. ACM Press (2019)

  4. Antipov, D., Doerr, B., Karavaev, V.: The \((1 + (\lambda , \lambda ))\) GA is even faster on multimodal problems. In: Proceedings of GECCO ’20, pp. 1259–1267. ACM Press (2020)

  5. Auger, A., Doerr, B. (eds.): Theory of Randomized Search Heuristics. World Scientific Publishing (2011)

  6. Burke, E.K., Gendreau, M., Hyde, M.R., Kendall, G., Ochoa, G., Özcan, E., Qu, R.: Hyper-heuristics: a survey of the state of the art. J. Oper. Res. Soc. 64, 1695–1724 (2013)

  7. Buzdalov, M., Doerr, B., Kever, M.: The unrestricted black-box complexity of jump functions. Evol. Comput. 24(4), 719–744 (2016)

  8. Corus, D., Oliveto, P.S., Yazdani, D.: Fast artificial immune systems. In: Proceedings of PPSN ’18, pp. 67–78. Springer (2018)

  9. Dang, D.C., Lehre, P.K.: Self-adaptation of mutation rates in non-elitist populations. In: Proceedings of PPSN ’16, pp. 803–813. Springer (2016)

  10. Doerr, B.: A tight runtime analysis for the cGA on jump functions: EDAs can cross fitness valleys at no extra cost. In: Proceedings of GECCO ’19, pp. 1488–1496. ACM Press (2019)

  11. Doerr, B., Doerr, C.: Optimal static and self-adjusting parameter choices for the (1+(\(\lambda \), \(\lambda \))) genetic algorithm. Algorithmica 80(5), 1658–1709 (2018)

  12. Doerr, B., Doerr, C.: Theory of parameter control for discrete black-box optimization: Provable performance gains through dynamic parameter choices. In: B. Doerr, F. Neumann (eds.) Theory of Evolutionary Computation—Recent Developments in Discrete Optimization, pp. 271–321. Springer (2020)

  13. Doerr, B., Doerr, C., Kötzing, T.: Static and self-adjusting mutation strengths for multi-valued decision variables. Algorithmica 80(5), 1732–1768 (2018)

  14. Doerr, B., Doerr, C., Yang, J.: k-bit mutation with self-adjusting k outperforms standard bit mutation. In: J. Handl, E. Hart, P.R. Lewis, M. López-Ibáñez, G. Ochoa, B. Paechter (eds.) Proceedings of PPSN 2016, Lecture Notes in Computer Science, vol. 9921, pp. 824–834. Springer (2016)

  15. Doerr, B., Fouz, M., Witt, C.: Quasirandom evolutionary algorithms. In: Proceedings of GECCO ’10, pp. 1457–1464. ACM Press (2010)

  16. Doerr, B., Gießen, C., Witt, C., Yang, J.: The (1 + \(\lambda \)) evolutionary algorithm with self-adjusting mutation rate. Algorithmica 81(2), 593–631 (2019)

  17. Doerr, B., Krejca, M.S.: Significance-based estimation-of-distribution algorithms. In: Proceedings of GECCO ’18, pp. 1483–1490. ACM Press (2018)

  18. Doerr, B., Le, H.P., Makhmara, R., Nguyen, T.D.: Fast genetic algorithms. In: Proceedings of GECCO ’17, pp. 777–784. ACM Press (2017)

  19. Doerr, B., Le, H.P., Makhmara, R., Nguyen, T.D.: Fast genetic algorithms. CoRR arXiv:1703.03334 (2017)

  20. Doerr, B., Neumann, F. (eds.): Theory of Evolutionary Computation—Recent Developments in Discrete Optimization. Natural Computing Series. Springer (2020)

  21. Doerr, B., Witt, C., Yang, J.: Runtime analysis for self-adaptive mutation rates. In: Proceedings of GECCO ’18, pp. 1475–1482. ACM Press (2018)

  22. Doerr, B., Zheng, W.: Theoretical analyses of multi-objective evolutionary algorithms on multi-modal objectives. In: Proceedings of AAAI ’21, pp. 12293–12301. AAAI Press (2021)

  23. Doerr, C., Wagner, M.: Sensitivity of parameter control mechanisms with respect to their initialization. In: Proceedings of PPSN ’18, pp. 360–372. Springer (2018)

  24. Doerr, C., Ye, F., van Rijn, S., Wang, H., Bäck, T.: Towards a theory-guided benchmarking suite for discrete black-box optimization heuristics: Profiling (1+\(\lambda \)) EA variants on OneMax and LeadingOnes. In: Proceedings of GECCO ’18, pp. 951–958. ACM Press (2018)

  25. Droste, S., Jansen, T., Wegener, I.: On the analysis of the (1+1) evolutionary algorithm. Theoret. Comput. Sci. 276, 51–81 (2002)

  26. Eiben, A.E., Marchiori, E., Valkó, V.A.: Evolutionary algorithms with on-the-fly population size adjustment. In: Proceedings of PPSN ’04, pp. 41–50. Springer (2004)

  27. Hevia Fajardo, M.A.: An empirical evaluation of success-based parameter control mechanisms for evolutionary algorithms. In: Proceedings of GECCO ’19, pp. 787–795. ACM Press (2019)

  28. Friedrich, T., Quinzan, F., Wagner, M.: Escaping large deceptive basins of attraction with heavy-tailed mutation operators. In: Proceedings of GECCO ’18, pp. 293–300. ACM Press (2018)

  29. Garnier, J., Kallel, L., Schoenauer, M.: Rigorous hitting times for binary mutations. Evol. Comput. 7, 173–203 (1999)

  30. Hajek, B.: Hitting and occupation time bounds implied by drift analysis with applications. Adv. Appl. Probab. 14, 502–525 (1982)

  31. Hansen, P., Mladenović, N.: Variable neighborhood search. In: R. Martí, P.M. Pardalos, M.G.C. Resende (eds.) Handbook of Heuristics, pp. 759–787. Springer (2018)

  32. Jansen, T.: Analyzing Evolutionary Algorithms—The Computer Science Perspective. Springer (2013)

  33. Jansen, T., Jong, K.A.D., Wegener, I.: On the choice of the offspring population size in evolutionary algorithms. Evol. Comput. 13, 413–440 (2005)

  34. Jansen, T., Wiegand, R.P.: The cooperative coevolutionary (1+1) EA. Evol. Comput. 12(4), 405–434 (2004)

  35. Lässig, J., Sudholt, D.: Adaptive population models for offspring populations and parallel evolutionary algorithms. In: Proceedings of FOGA ’11, pp. 181–192. ACM Press (2011)

  36. Lengler, J.: A general dichotomy of evolutionary algorithms on monotone functions. In: Proceedings of PPSN ’18, pp. 3–15. Springer (2018)

  37. Lissovoi, A., Oliveto, P.S., Warwicker, J.A.: Simple hyper-heuristics control the neighbourhood size of randomised local search optimally for LeadingOnes. Evol. Comput. (2020). In press

  38. Neumann, F., Witt, C.: Bioinspired Computation in Combinatorial Optimization—Algorithms and Their Computational Complexity. Springer (2010)

  39. Rajabi, A., Witt, C.: Self-adjusting evolutionary algorithms for multimodal optimization. In: Proceedings of GECCO ’20, pp. 1314–1322. ACM Press (2020)

  40. Rajabi, A., Witt, C.: Stagnation detection with randomized local search. In: Proceedings of EvoCOP ’21, pp. 152–168. Springer (2021)

  41. Rodionova, A., Antonov, K., Buzdalova, A., Doerr, C.: Offspring population size matters when comparing evolutionary algorithms with self-adjusting mutation rates. In: Proceedings of GECCO ’19, pp. 855–863. ACM Press (2019)

  42. Rohlfshagen, P., Lehre, P.K., Yao, X.: Dynamic evolutionary optimisation: an analysis of frequency and magnitude of change. In: Proceedings of GECCO ’09, pp. 1713–1720. ACM Press (2009)

  43. Rowe, J.E., Aishwaryaprajna: The benefits and limitations of voting mechanisms in evolutionary optimisation. In: Proceedings of FOGA ’19, pp. 34–42. ACM Press (2019)

  44. Sudholt, D.: A new method for lower bounds on the running time of evolutionary algorithms. IEEE Trans. Evol. Comput. 17, 418–435 (2013)

  45. Wegener, I.: Methods for the analysis of evolutionary algorithms on pseudo-Boolean functions. In: R. Sarker, M. Mohammadian, X. Yao (eds.) Evolutionary Optimization. Kluwer Academic Publishers (2001)

  46. Whitley, D., Varadarajan, S., Hirsch, R., Mukhopadhyay, A.: Exploration and exploitation without mutation: Solving the jump function in \(\Theta (n)\) time. In: Proceedings of PPSN ’18, pp. 55–66. Springer (2018)

  47. Witt, C.: Population size vs. runtime of a simple EA. In: Proceedings of CEC ’03, vol. 3, pp. 1996–2003. IEEE Press (2003)

  48. Witt, C.: Runtime analysis of the (\(\mu \)+1) EA on simple pseudo-Boolean functions. Evol. Comput. 14(1), 65–86 (2006)

  49. Witt, C.: Population size versus runtime of a simple evolutionary algorithm. Theoret. Comput. Sci. 403(1), 104–120 (2008)

  50. Witt, C.: Tight bounds on the optimization time of a randomized search heuristic on linear functions. Comb. Probab. Comput. 22, 294–318 (2013)

  51. Ye, F., Doerr, C., Bäck, T.: Interpolating local and global search by controlling the variance of standard bit mutation. In: Proceedings of CEC ’19, pp. 2292–2299 (2019)

Acknowledgements

This work was supported by a grant by the Danish Council for Independent Research (DFF-FNU 8021–00260B).

Author information

Corresponding author

Correspondence to Amirhossein Rajabi.

Additional information

An extended abstract of this work appeared in the proceedings of GECCO 2020 [39]. This article includes full proofs and improved running time bounds, in particular for the Jump function in the case of \(m=\varOmega (n)\). These improvements are based on an optimized parameter choice for the so-called counter threshold, see Sect. 2.1 for a detailed discussion.

About this article

Cite this article

Rajabi, A., Witt, C. Self-Adjusting Evolutionary Algorithms for Multimodal Optimization. Algorithmica 84, 1694–1723 (2022). https://doi.org/10.1007/s00453-022-00933-z

