A Rigorous Evaluation of Crossover and Mutation in Genetic Programming

  • David R. White
  • Simon Poulding
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5481)

Abstract

The role of crossover and mutation in Genetic Programming (GP) has been the subject of much debate since the emergence of the field. In this paper, we contribute new empirical evidence to this debate using a rigorous and principled experimental method applied to six problems common in the GP literature. The approach tunes the algorithm parameters to enable a fair and objective comparison of two different GP algorithms: the first using a combination of crossover and reproduction, and the second using a combination of mutation and reproduction. We find that crossover does not significantly outperform mutation on most of the problems examined. In addition, we demonstrate that the use of a straightforward Design of Experiments methodology is effective at tuning GP algorithm parameters.
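The full experimental protocol is given in the paper itself; as a rough, self-contained illustration of the kind of Design of Experiments tuning the abstract describes, the Python sketch below evaluates a face-centred central composite design over two GP parameters and fits a second-order response surface by least squares. The run_gp stub, the choice of parameters (population size and generation count), and their ranges are hypothetical placeholders rather than settings taken from the paper.

    import numpy as np

    # Hypothetical stand-in for one GP run; in the paper the response would be a
    # performance measure of the GP system on one of the six benchmark problems.
    def run_gp(pop_size, generations, seed):
        rng = np.random.default_rng(seed)
        # Toy quadratic response with noise, purely for illustration.
        return (-(pop_size - 500) ** 2 / 1e5
                - (generations - 60) ** 2 / 50
                + rng.normal(0, 1))

    # Map a coded factor level in [-1, +1] to its natural parameter range,
    # as is conventional in response surface methodology.
    def decode(x, lo, hi):
        return lo + (x + 1) * (hi - lo) / 2

    # Face-centred central composite design for two factors:
    # factorial corners, axial points, and replicated centre points.
    corners = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    axial = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    centre = [(0, 0)] * 3
    design = corners + axial + centre

    rows, responses = [], []
    for i, (x1, x2) in enumerate(design):
        pop = decode(x1, 100, 1000)    # population size range (illustrative)
        gens = decode(x2, 10, 100)     # generation count range (illustrative)
        responses.append(run_gp(pop, gens, seed=i))
        # Columns of the full second-order model: 1, x1, x2, x1*x2, x1^2, x2^2.
        rows.append([1, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

    # Fit the response surface; its stationary point suggests promising settings
    # around which further comparative runs could be made.
    beta, *_ = np.linalg.lstsq(np.array(rows), np.array(responses), rcond=None)
    print("second-order response surface coefficients:", beta)

As the abstract notes, the point of such tuning is that each of the two operator configurations is compared at its own well-chosen parameter settings, so neither is handicapped by arbitrary parameter choices.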

Keywords

Response Surface Methodology, Genetic Programming, Problem Instance, Central Composite Design, Rigorous Evaluation

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • David R. White
  • Simon Poulding
  1. Dept. of Computer Science, University of York, Heslington, York, UK
