A Comparison of Fitness-Case Sampling Methods for Symbolic Regression with Genetic Programming

  • Yuliana Martínez
  • Leonardo Trujillo
  • Enrique Naredo
  • Pierrick Legrand
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 288)

Abstract

The canonical approach to fitness evaluation in Genetic Programming (GP) is to use a static training set and compute fitness as a cost function averaged over all fitness-cases. However, motivated by different goals, researchers have recently proposed several techniques that focus selective pressure on a subset of fitness-cases at each generation. These approaches can be described as fitness-case sampling techniques, in which the training set is sampled, in some way, to determine fitness. This paper presents a comprehensive evaluation of some of the most recent sampling methods on benchmark and real-world symbolic regression problems. The algorithms considered are Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection, and a newly proposed sampling technique called Keep-Worst Interleaved Sampling (KW-IS). The algorithms are extensively evaluated in terms of test performance, overfitting and bloat. Results suggest that the sampling techniques can improve performance compared with standard GP: on synthetic benchmarks the difference is slight or nonexistent, but on real-world problems the differences are substantial. Some of the best results were achieved by Lexicase Selection and Keep-Worst Interleaved Sampling. Results also show that on real-world problems overfitting correlates strongly with bloat. Furthermore, the sampling techniques improve efficiency, since they reduce the number of fitness-case evaluations required over an entire run.
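To make the ideas above concrete, the following is a minimal, illustrative Python sketch of two of the techniques named in the abstract: Random Interleaved Sampling, in which each generation is evaluated either on a single randomly chosen fitness case or on the full training set, and Lexicase Selection, in which parent candidates are filtered case by case. This is a simplified reconstruction under stated assumptions (a per-generation probability `p_single`, an error matrix `errors[i][j]`), not the implementation evaluated in the paper.

```python
import random


def generation_fitness_cases(all_cases, p_single=0.5):
    """Random Interleaved Sampling (simplified sketch).

    Once per generation, with probability `p_single` the whole population
    is evaluated on a single randomly chosen fitness case; otherwise the
    full training set is used.
    """
    if random.random() < p_single:
        return [random.choice(all_cases)]
    return list(all_cases)


def lexicase_select(population, errors):
    """Pick one parent with (standard) Lexicase Selection.

    `errors[i][j]` is the error of population[i] on fitness case j.
    Candidates are filtered case by case, in a random case order, keeping
    only those with the best error on the current case.
    """
    candidates = list(range(len(population)))
    cases = list(range(len(errors[0])))
    random.shuffle(cases)
    for case in cases:
        best = min(errors[i][case] for i in candidates)
        candidates = [i for i in candidates if errors[i][case] == best]
        if len(candidates) == 1:
            break
    # any remaining ties are broken at random
    return population[random.choice(candidates)]


if __name__ == "__main__":
    # toy example: 4 "programs" scored on 3 fitness cases
    pop = ["p0", "p1", "p2", "p3"]
    errs = [[0.1, 0.5, 0.2],
            [0.1, 0.4, 0.9],
            [0.3, 0.4, 0.2],
            [0.2, 0.6, 0.1]]
    print(lexicase_select(pop, errs))
    print(generation_fitness_cases(list(range(20)), p_single=0.5))
```

Both routines touch only a fraction of the fitness cases in a typical call, which is the source of the efficiency gain discussed in the abstract.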

Keywords

Fitness-Case Sampling · Symbolic Regression · Performance Evaluation


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Yuliana Martínez (1)
  • Leonardo Trujillo (1)
  • Enrique Naredo (1)
  • Pierrick Legrand (2, 3)

  1. TREE-LAB, Doctorado en Ciencias de la Ingeniería, Departamento de Ingeniería Eléctrica y Electrónica, Instituto Tecnológico de Tijuana, Tijuana, México
  2. UMR CNRS 5251, Université Victor Segalen Bordeaux 2 and Institut de Mathématiques de Bordeaux, Bordeaux, France
  3. ALEA Team, INRIA Bordeaux Sud-Ouest, Talence, France
