Let’s Get Ready to Rumble: Crossover Versus Mutation Head to Head

  • Kumara Sastry
  • David E. Goldberg
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3103)

Abstract

This paper analyzes the relative advantages of crossover and mutation on a class of deterministic and stochastic additively separable problems. The study assumes that the recombination and mutation operators have knowledge of the building blocks (BBs) and effectively exchange or search among competing BBs. Facetwise models of convergence time and population sizing are used to determine the scalability of each algorithm. The analysis shows that for additively separable deterministic problems, BB-wise mutation is more efficient than crossover, while crossover outperforms mutation on additively separable problems perturbed with additive Gaussian noise. The results show that the speed-up of using BB-wise mutation on deterministic problems is \({\mathcal{O}}(\sqrt{k}\log m)\), where k is the BB size and m is the number of BBs. Likewise, the speed-up of using crossover on stochastic problems with fixed noise variance is \({\mathcal{O}}(m\sqrt{k}/\log m)\).
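
A rough sense of these scaling claims can be had by plugging sample values into the two speed-up expressions. The sketch below is illustrative only, not part of the paper: the values of k and m are arbitrary examples, and constant factors are ignored.

    # Illustrative sketch: evaluates the asymptotic speed-up terms from the
    # abstract for hypothetical BB sizes (k) and numbers of BBs (m).
    # Constant factors are ignored; log is the natural logarithm.
    import math

    def mutation_speedup_deterministic(k: int, m: int) -> float:
        # O(sqrt(k) * log m): speed-up of BB-wise mutation over crossover
        # on deterministic additively separable problems.
        return math.sqrt(k) * math.log(m)

    def crossover_speedup_noisy(k: int, m: int) -> float:
        # O(m * sqrt(k) / log m): speed-up of crossover over BB-wise
        # mutation under additive Gaussian noise of fixed variance.
        return m * math.sqrt(k) / math.log(m)

    k = 4  # example BB size
    for m in (10, 50, 100):  # example numbers of BBs
        print(f"m={m:3d}, k={k}: "
              f"mutation speed-up ~ {mutation_speedup_deterministic(k, m):7.2f}, "
              f"crossover speed-up ~ {crossover_speedup_noisy(k, m):7.2f}")

Note how the crossover term grows nearly linearly in m: as the number of building blocks increases on noisy problems, the predicted advantage of recombination over BB-wise mutation widens accordingly.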

Keywords

Genetic Algorithm · Mutation Operator · Separable Problem · Additive Gaussian Noise · Parallel Genetic Algorithm

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Kumara Sastry (1, 2)
  • David E. Goldberg (1, 3)
  1. Illinois Genetic Algorithms Laboratory (IlliGAL), University of Illinois at Urbana-Champaign, Urbana, USA
  2. Department of Material Science & Engineering, University of Illinois at Urbana-Champaign, Urbana, USA
  3. Department of General Engineering, University of Illinois at Urbana-Champaign, Urbana, USA
