The Evolutionary Buffet Method

  • Arend Hintze
  • Jory Schossau
  • Clifford Bohm
Part of the Genetic and Evolutionary Computation book series (GEVO)


Within the fields of Genetic Algorithms (GA) and Artificial Intelligence (AI), a variety of computational substrates with the power to find solutions to a wide range of problems have been described. Research has specialized in different computational substrates that each excel in different problem domains. For example, Artificial Neural Networks (ANN) (Russell et al., Artificial intelligence: a modern approach, vol 2. Prentice Hall, Upper Saddle River, 2003) have proven effective at classification, Genetic Programs (by which we mean mathematical tree-based genetic programming, abbreviated GP) (Koza, Stat Comput 4:87–112, 1994) are often used to find complex equations that fit data, Neuro Evolution of Augmenting Topologies (NEAT) (Stanley and Miikkulainen, Evolut Comput 10:99–127, 2002) is good at robotics control problems (Cully et al., Nature 521:503, 2015), and Markov Brains (MB) (Edlund et al., PLoS Comput Biol 7:e1002236, 2011; Marstaller et al., Neural Comput 25:2079–2107, 2013; Hintze et al., Markov brains: a technical introduction. arXiv:1709.05601, 2017) are used to test hypotheses about evolutionary behavior (Olson et al., J R Soc Interf 10:20130305, 2013), among many other examples. Given the wide range of problems and the vast number of computational substrates, practitioners of GA and AI face the difficulty that every new problem requires an assessment to find an appropriate computational substrate, as well as specific parameter tuning, to achieve optimal results.
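To make this difficulty concrete, the minimal sketch below (in Python; illustrative only, not the method described in this chapter) shows the alternative the title alludes to: placing the choice of substrate itself under selection, so that evolution rather than the practitioner settles on a substrate suited to the task. The substrate labels and the placeholder evaluate fitness function are assumptions made for the example.

    # Hypothetical sketch: the choice of computational substrate is itself an
    # evolvable trait. All names here are illustrative assumptions, not the
    # chapter's implementation.
    import random

    SUBSTRATES = ["ANN", "GP", "NEAT", "MarkovBrain"]  # candidate substrates

    def evaluate(genome):
        # Placeholder fitness: a real setup would instantiate the encoded
        # substrate (network, program tree, ...) and score it on the task.
        # Here we simply pretend the task happens to favor one substrate.
        return (genome["substrate"] == "NEAT") + 0.1 * random.random()

    def mutate(genome, rate=0.05):
        # Copy the parent; occasionally switch to a different substrate.
        child = dict(genome)
        if random.random() < rate:
            child["substrate"] = random.choice(SUBSTRATES)
        return child

    def evolve(pop_size=100, generations=50):
        population = [{"substrate": random.choice(SUBSTRATES)}
                      for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=evaluate, reverse=True)
            parents = ranked[: pop_size // 2]  # truncation selection
            population = [mutate(random.choice(parents))
                          for _ in range(pop_size)]
        return population

    # After selection, the population concentrates on whichever substrate
    # scores best on the task at hand.
    final = evolve()
    for s in SUBSTRATES:
        print(s, sum(g["substrate"] == s for g in final))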



This work was funded in part by the NSF BEACON Center for the Study of Evolution in Action (DBI-0939454). We thank Ken Stanley, Joel Lehman, and Randal Olson for insightful discussions on HyperNEAT and Markov Brain crossovers.


  1. Adami, C., Brown, C.T.: Evolutionary learning in the 2D artificial life system Avida. In: Artificial Life IV, vol. 1194, pp. 377–381. MIT Press, Cambridge, MA (1994)
  2. Adami, C., Schossau, J., Hintze, A.: Evolutionary game theory using agent-based methods. Physics of Life Reviews 19, 1–26 (2016)
  3. Albantakis, L., Hintze, A., Koch, C., Adami, C., Tononi, G.: Evolution of integrated causal structures in animats exposed to environments of increasing complexity. PLoS Computational Biology 10, e1003966 (2014)
  4. Barto, A.G., Sutton, R.S., Anderson, C.W.: Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics 13, 834–846 (1983)
  5. Beer, R.D., et al.: Toward the evolution of dynamical neural networks for minimally cognitive behavior. From Animals to Animats 4, 421–429 (1996)
  6. Bohm, C., CG, N., Hintze, A.: MABE (modular agent based evolver): a framework for digital evolution research. In: Proceedings of the European Conference on Artificial Life (2017)
  7. Cully, A., Clune, J., Tarapore, D., Mouret, J.B.: Robots that can adapt like animals. Nature 521, 503 (2015)
  8. Edlund, J.A., Chaumont, N., Hintze, A., Koch, C., Tononi, G., Adami, C.: Integrated information increases with fitness in the evolution of animats. PLoS Computational Biology 7, e1002236 (2011)
  9. Elman, J.L.: Finding structure in time. Cognitive Science 14, 179–211 (1990)
  10. Goldman, B.W., Punch, W.F.: Parameter-less population pyramid. In: GECCO '14: Proceedings of the 2014 Conference on Genetic and Evolutionary Computation, pp. 785–792. ACM, Vancouver, BC, Canada (2014)
  11. Grabowski, L.M., Bryson, D.M., Dyer, F.C., Ofria, C., Pennock, R.T.: Early evolution of memory usage in digital organisms. In: ALIFE, pp. 224–231 (2010)
  12. Hintze, A., et al.: Markov brains: a technical introduction. arXiv preprint arXiv:1709.05601 (2017)
  13. Hintze, A., Mirmomeni, M.: Evolution of autonomous hierarchy formation and maintenance. Artificial Life 14, 366–367 (2014)
  14. Jacobs, R.A., Jordan, M.I., Nowlan, S.J., Hinton, G.E.: Adaptive mixtures of local experts. Neural Computation 3, 79–87 (1991)
  15. James, D., Tucker, P.: A comparative analysis of simplification and complexification in the evolution of neural network topologies. In: Proceedings of the Genetic and Evolutionary Computation Conference (2004)
  16. Jordan, M.I.: Serial order: a parallel distributed processing approach. In: Advances in Psychology, vol. 121, pp. 471–495. Elsevier (1997)
  17. Kaelbling, L.P., Littman, M.L., Cassandra, A.R.: Planning and acting in partially observable stochastic domains. Artificial Intelligence 101, 99–134 (1998)
  18. Koza, J.R.: Genetic programming as a means for programming computers by natural selection. Statistics and Computing 4, 87–112 (1994)
  19. Kvam, P., Cesario, J., Schossau, J., Eisthen, H., Hintze, A.: Computational evolution of decision-making strategies. arXiv preprint arXiv:1509.05646 (2015)
  20. Lehman, J., Stanley, K.O.: Exploiting open-endedness to solve problems through the search for novelty. In: ALIFE, pp. 329–336 (2008)
  21. Marstaller, L., Hintze, A., Adami, C.: The evolution of representation in simple cognitive networks. Neural Computation 25, 2079–2107 (2013)
  22. Merrild, J., Rasmussen, M.A., Risi, S.: HyperENTM: evolving scalable neural Turing machines through HyperNEAT. arXiv preprint arXiv:1710.04748 (2017)
  23. Miller, J.F.: Cartesian genetic programming. In: Cartesian Genetic Programming, pp. 17–34. Springer (2011)
  24. Mouret, J.B., Clune, J.: Illuminating search spaces by mapping elites. arXiv preprint arXiv:1504.04909 (2015)
  25. Olson, R.S., Hintze, A., Dyer, F.C., Knoester, D.B., Adami, C.: Predator confusion is sufficient to evolve swarming behaviour. Journal of The Royal Society Interface 10, 20130305 (2013)
  26. OpenAI Gym Toolkit (2018). URL [Online; accessed 1-Jan-2018]
  27. Real, E., Moore, S., Selle, A., Saxena, S., Suematsu, Y.L., Tan, J., Le, Q., Kurakin, A.: Large-scale evolution of image classifiers. arXiv preprint arXiv:1703.01041 (2017)
  28. Russell, S.J., Norvig, P., Canny, J.F., Malik, J.M., Edwards, D.D.: Artificial Intelligence: A Modern Approach, vol. 2. Prentice Hall, Upper Saddle River (2003)
  29. Schaffer, C.: A conservation law for generalization performance. In: Proceedings of the 11th International Conference on Machine Learning, pp. 259–265 (1994)
  30. Schossau, J., Adami, C., Hintze, A.: Information-theoretic neuro-correlates boost evolution of cognitive systems. Entropy 18, 6 (2015)
  31. Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., Dean, J.: Outrageously large neural networks: the sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538 (2017)
  32. Sheneman, L., Hintze, A.: Evolving autonomous learning in cognitive networks. Scientific Reports 7, 16712 (2017)
  33. Smith, A.W.: neat-python (2015). URL [Online; accessed 10-31-2017]
  34. Stanley, K.O., D'Ambrosio, D.B., Gauci, J.: A hypercube-based encoding for evolving large-scale neural networks. Artificial Life 15, 185–212 (2009)
  35. Stanley, K.O., Miikkulainen, R.: Evolving neural networks through augmenting topologies. Evolutionary Computation 10, 99–127 (2002)
  36. Thornton, C., Hutter, F., Hoos, H.H., Leyton-Brown, K.: Auto-WEKA: combined selection and hyperparameter optimization of classification algorithms. In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '13, pp. 847–855. ACM, New York, NY, USA (2013)
  37. Trujillo, L., Muñoz, L., Naredo, E., Martínez, Y.: NEAT, there's no bloat. In: European Conference on Genetic Programming, pp. 174–185. Springer (2014)
  38. Wikipedia: Inverted pendulum — Wikipedia, the free encyclopedia (2018). URL [Online; accessed 1-Jan-2018]
  39. Wolpert, D.H.: The lack of a priori distinctions between learning algorithms. Neural Computation 8, 1341–1390 (1996)
  40. Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1, 67–82 (1997)
  41. Wolpert, D.H., Macready, W.G.: Coevolutionary free lunches. IEEE Transactions on Evolutionary Computation 9, 721–735 (2005)
  42. Wolpert, D.H., Macready, W.G., et al.: No free lunch theorems for search. Technical Report SFI-TR-95-02-010, Santa Fe Institute (1995)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Integrative Biology, Department of Computer Science and Engineering, and BEACON Center for the Study of Evolution in Action, Michigan State University, East Lansing, USA
  2. Department of Integrative Biology and BEACON Center for the Study of Evolution in Action, Michigan State University, East Lansing, USA
