Algorithmica, Volume 80, Issue 5, pp. 1732–1768

Static and Self-Adjusting Mutation Strengths for Multi-valued Decision Variables

Part of the Special Issue on Genetic and Evolutionary Computation

Abstract

Bit strings are the most common representation in evolutionary computation. Since very little theoretical work exists on how to use evolutionary algorithms for decision variables taking more than two values, we study the run time of simple evolutionary algorithms on some OneMax-like functions defined over \(\varOmega = \{0, 1, \ldots , r-1\}^n\). We observe a crucial difference in how the one-bit-flip and standard-bit mutation operators are extended to the multi-valued domain. While it is natural to modify a random position of the string, or to select each position of the solution vector for modification independently with probability 1/n, there are various ways to then change such a position. If we change each selected position to a random value different from the original one, we obtain an expected run time of \(\varTheta (nr \log n)\). If we change each selected position by \(+1\) or \(-1\) (random choice), the optimization time reduces to \(\varTheta (nr + n\log n)\). If we use a random mutation strength \(i \in \{1,\ldots ,r-1\}\) with probability inversely proportional to i and change the selected position by \(+i\) or \(-i\) (random choice), then the optimization time becomes \(\varTheta (n \log (r)(\log n +\log r))\), which is asymptotically faster than the previous bound if \(r = \omega (\log (n) \log \log (n))\). Interestingly, an even better expected performance can be achieved with a self-adjusting mutation strength that is based on the success of previous iterations. For the mutation operator that modifies a randomly chosen position, we show that the self-adjusting mutation strength yields an expected optimization time of \(\varTheta (n (\log n + \log r))\), which is best possible among all dynamic mutation strengths. In our proofs, we use a new multiplicative drift theorem for computing lower bounds, which is not restricted to processes that move only towards the target.
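The harmonic-strength mutation operator described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the use of the l1-distance to a target as the OneMax-like fitness, and the capping of offspring values at the domain boundaries are our assumptions.

```python
import random

def harmonic_strength(r, rng):
    # Sample a mutation strength i in {1, ..., r-1} with probability
    # proportional to 1/i (a harmonic distribution).
    strengths = range(1, r)
    weights = [1.0 / i for i in strengths]
    return rng.choices(strengths, weights=weights)[0]

def mutate(x, r, rng):
    # Change one uniformly chosen position by +i or -i (random choice),
    # where i is a harmonic mutation strength; offspring values are
    # capped at the domain boundaries (one simple way to stay feasible).
    y = list(x)
    pos = rng.randrange(len(x))
    i = harmonic_strength(r, rng)
    step = i if rng.random() < 0.5 else -i
    y[pos] = min(r - 1, max(0, y[pos] + step))
    return y

def one_plus_one_ea(target, r, max_iters=100_000, seed=1):
    # (1+1) EA minimizing the l1-distance to `target`, a OneMax-like
    # fitness on {0, ..., r-1}^n; returns (solution, iterations used).
    rng = random.Random(seed)
    x = [rng.randrange(r) for _ in range(len(target))]
    fitness = lambda z: sum(abs(a - b) for a, b in zip(z, target))
    fx = fitness(x)
    for t in range(max_iters):
        if fx == 0:
            return x, t
        y = mutate(x, r, rng)
        fy = fitness(y)
        if fy <= fx:  # elitist acceptance: keep offspring if not worse
            x, fx = y, fy
    return x, max_iters
```

Sampling the strength harmonically lets the operator combine large jumps (useful far from the optimum) with unit steps (needed close to it), which is the intuition behind the \(\varTheta (n \log (r)(\log n +\log r))\) bound stated above.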

Keywords

Theory of randomized search heuristics · Runtime analysis · Genetic algorithms · Parameter choice · Parameter control

Notes

Acknowledgements

This work was supported by a public grant as part of the Investissement d’avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH, in a joint call with Programme Gaspard Monge en Optimisation et Recherche Opérationnelle.


Copyright information

© Springer Science+Business Media, LLC 2017

Authors and Affiliations

  1. LIX, UMR 7161, École Polytechnique, Palaiseau, France
  2. CNRS, LIP6 UMR 7606, Sorbonne Universités, UPMC Univ Paris 06, Paris, France
  3. Hasso-Plattner-Institut, Potsdam, Germany
