Genetic Programming and Evolvable Machines

Volume 14, Issue 1, pp 3–29

Better GP benchmarks: community survey results and proposals

  • David R. White
  • James McDermott
  • Mauro Castelli
  • Luca Manzoni
  • Brian W. Goldman
  • Gabriel Kronberger
  • Wojciech Jaśkowski
  • Una-May O’Reilly
  • Sean Luke

Abstract

We present the results of a community survey regarding genetic programming benchmark practices. Analysis shows broad consensus that improvement is needed in problem selection and experimental rigor. While views expressed in the survey dissuade us from proposing a large-scale benchmark suite, we find community support for creating a “blacklist” of problems which are in common use but have important flaws, and whose use should therefore be discouraged. We propose a set of possible replacement problems.

Keywords

Genetic programming · Benchmarks · Community survey

Copyright information

© Springer Science+Business Media New York 2012

Authors and Affiliations

  • David R. White (1)
  • James McDermott (2)
  • Mauro Castelli (3)
  • Luca Manzoni (4)
  • Brian W. Goldman (5)
  • Gabriel Kronberger (6)
  • Wojciech Jaśkowski (7)
  • Una-May O’Reilly (8)
  • Sean Luke (9)
  1. School of Computing Science, University of Glasgow, Glasgow, UK
  2. School of Business, University College Dublin, Dublin, Ireland
  3. Instituto Superior de Estatística e Gestão de Informação (ISEGI), Universidade Nova de Lisboa, Lisbon, Portugal
  4. Dipartimento di Informatica, Sistemistica e Comunicazione, University of Milano-Bicocca, Milan, Italy
  5. BEACON Center for the Study of Evolution in Action, Michigan State University, East Lansing, USA
  6. University of Applied Sciences Upper Austria, Linz, Austria
  7. Institute of Computing Science, Poznan University of Technology, Poznan, Poland
  8. CSAIL, Massachusetts Institute of Technology, Cambridge, USA
  9. Department of Computer Science, George Mason University, Fairfax, USA
