The Evolution of Representations in Genetic Programming Trees

Part of the Genetic and Evolutionary Computation book series (GEVO)

Abstract

Artificially intelligent machines have to explore their environment, store information about it, and use this information to improve future decision making. As such, the quest is either to provide these systems with internal models of their environment or to imbue machines with the ability to create their own models, ideally the latter. These models are mental representations of the environment, and we have previously shown that neuroevolution is a powerful method for creating artificially intelligent machines (also referred to as agents) that can form such representations. Furthermore, we have shown that representations can be quantified and that this quantity can be used to augment the performance of a genetic algorithm: instead of optimizing for performance alone, one can also positively select for agents that have better representations. This neuroevolutionary approach, which improves performance and lets agents develop representations, works well for Markov Brains, a form of Cartesian Genetic Programming network. Conventional artificial neural networks and their recurrent counterparts, RNNs and LSTMs, are however primarily trained by backpropagation rather than evolved, and they behave differently with respect to their ability to form representations. When evolved, RNNs and LSTMs do not form sparse and distinct representations; instead, they "smear" the information about individual concepts of the environment over all nodes in the system, which ultimately makes these systems more brittle and less capable. The question we seek to address is: how can we create systems that evolve to have meaningful representations while preventing them from smearing those representations? We look at genetic programming trees as an interesting computational paradigm, since they can take in a great deal of information through their many leaves while ultimately condensing the computation into a single root node. We hypothesize that this computational condensation could also prevent the smearing of information. Here, we explore how these tree structures evolve and form representations, and we test to what degree these systems either "smear" or condense information.
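The condensation property described above can be illustrated with a minimal sketch (our own illustrative code, not the authors' implementation): many environment inputs enter a genetic programming tree through its leaves, but the evaluation necessarily collapses into a single value at the root.

```python
# Illustrative sketch: a tiny genetic programming tree.
# A node is either ("leaf", input_index) or (function, left_child, right_child).
import operator

def evaluate(node, inputs):
    """Recursively evaluate a GP tree over a vector of sensor inputs."""
    if node[0] == "leaf":
        return inputs[node[1]]
    fn, left, right = node
    return fn(evaluate(left, inputs), evaluate(right, inputs))

# Four leaves take in information; the root emits one condensed value.
tree = (operator.add,
        (operator.mul, ("leaf", 0), ("leaf", 1)),
        (operator.sub, ("leaf", 2), ("leaf", 3)))

print(evaluate(tree, [1.0, 2.0, 5.0, 3.0]))  # (1*2) + (5-3) = 4.0
```

However many sensor inputs the leaves read, all intermediate results are funneled through internal nodes into the single root output, in contrast to an RNN or LSTM, where state is spread across every node of the network.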

Keywords

  • Neuroevolution
  • Artificial intelligence
  • Cognitive representations
  • Markov brain


Notes

  1. Observe that the term representations in computer science sometimes also refers to the structure of data or to how an algorithm, for example, is encoded. We mean neither; instead, we use the term representation to refer to the information a cognitive system has about its environment, as defined in Marstaller et al. [19]. The term representation, as we use it, is adapted from the fields of psychology and philosophy.

  2. Inspired by the multiple trees used to encode memory in Langdon [18].

References

  1. Banzhaf, W., Nordin, P., Keller, R.E., Francone, F.D.: Genetic Programming: An Introduction. Morgan Kaufmann, San Francisco, CA (1998)
  2. Beer, R.D.: The dynamics of active categorical perception in an evolved model agent. Adaptive Behavior 11(4), 209–243 (2003)
  3. Beer, R.D., et al.: Toward the evolution of dynamical neural networks for minimally cognitive behavior. In: From Animals to Animats, vol. 4, pp. 421–429 (1996)
  4. Bengio, Y., Frasconi, P.: An input output HMM architecture. In: Advances in Neural Information Processing Systems, pp. 427–434 (1995)
  5. Bohm, C., CG, N., Hintze, A.: MABE (Modular Agent Based Evolver): A framework for digital evolution research. In: Proceedings of the European Conference on Artificial Life (2017)
  6. Brooks, R.A.: Intelligence without representation. Artificial Intelligence 47(1–3), 139–159 (1991)
  7. Clune, J., Stanley, K.O., Pennock, R.T., Ofria, C.: On the performance of indirect encoding across the continuum of regularity. IEEE Transactions on Evolutionary Computation 15(3), 346–367 (2011)
  8. Deb, K.: Multi-Objective Optimization Using Evolutionary Algorithms. John Wiley & Sons (2001)
  9. Floreano, D., Dürr, P., Mattiussi, C.: Neuroevolution: From architectures to learning. Evolutionary Intelligence 1(1), 47–62 (2008)
  10. Handley, S.G.: The automatic generation of plans for a mobile robot via genetic programming with automatically defined functions. In: Advances in Genetic Programming, vol. 18, pp. 391–407. MIT Press (1994)
  11. Hintze, A., Edlund, J.A., Olson, R.S., Knoester, D.B., Schossau, J., Albantakis, L., Tehrani-Saleh, A., Kvam, P., Sheneman, L., Goldsby, H., Bohm, C., Adami, C.: Markov brains: A technical introduction. arXiv preprint arXiv:1709.05601 (2017)
  12. Hintze, A., Kirkpatrick, D., Adami, C.: The structure of evolved representations across different substrates for artificial intelligence. In: Artificial Life Conference Proceedings, pp. 388–395. MIT Press (2018)
  13. Hintze, A., Schossau, J., Bohm, C.: The evolutionary Buffet method. In: Genetic Programming Theory and Practice XVI, pp. 17–36. Springer (2019). https://link.springer.com/chapter/10.1007/978-3-030-04735-1_2
  14. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9(8), 1735–1780 (1997)
  15. Kirkpatrick, D., Hintze, A.: The role of ambient noise in the evolution of robust mental representations in cognitive systems. In: Artificial Life Conference Proceedings (31), 432–439 (2019). https://doi.org/10.1162/isal_a_00198
  16. Koza, J.R.: Genetic programming as a means for programming computers by natural selection. Statistics and Computing 4(2), 87–112 (1994)
  17. Koza, J.R., Rice, J.P.: Automatic programming of robots using genetic programming. In: AAAI, vol. 92, pp. 194–207 (1992)
  18. Langdon, W.B.: Evolving data structures with genetic programming. In: International Conference on Genetic Algorithms, pp. 295–302 (1995)
  19. Marstaller, L., Hintze, A., Adami, C.: The evolution of representation in simple cognitive networks. Neural Computation 25(8), 2079–2107 (2013)
  20. Merritt, D.J., Brannon, E.M.: Nothing to it: Precursors to a zero concept in preschoolers. Behavioural Processes 93, 91–97 (2013)
  21. Merritt, D.J., Rugani, R., Brannon, E.M.: Empty sets as part of the numerical continuum: Conceptual precursors to the zero concept in rhesus monkeys. Journal of Experimental Psychology: General 138(2), 258 (2009)
  22. Miller, J.F.: Cartesian Genetic Programming. Springer (2011)
  23. Miller, J.F.: Cartesian genetic programming. In: Cartesian Genetic Programming, pp. 17–34. Springer (2011)
  24. Nieder, A.: Honey bees zero in on the empty set. Science 360(6393), 1069–1070 (2018)
  25. Nordin, P.: A compiling genetic programming system that directly manipulates the machine code. In: Advances in Genetic Programming, vol. 1, pp. 311–331. MIT Press (1994)
  26. Nordin, P., Banzhaf, W.: Genetic programming controlling a miniature robot. In: Working Notes for the AAAI Symposium on Genetic Programming, vol. 61, p. 67. MIT, Cambridge, MA, USA, AAAI (1995)
  27. Nordin, P., Banzhaf, W.: An on-line method to evolve behavior and to control a miniature robot in real time with genetic programming. Adaptive Behavior 5(2), 107–140 (1997)
  28. Nordin, P., Banzhaf, W.: Real-time control of a Khepera robot using genetic programming. Control and Cybernetics 26, 533–562 (1997)
  29. Reynolds, C.W.: An evolved, vision-based behavioral model of coordinated group motion. In: Proceedings of From Animals to Animats, vol. 2, pp. 384–392 (1993)
  30. Reynolds, C.W.: Evolution of obstacle avoidance behavior: Using noise to promote robust solutions. In: Advances in Genetic Programming, vol. 1, pp. 221–241. MIT Press, Cambridge, MA (1994)
  31. Russell, S.J., Norvig, P., Canny, J.F., Malik, J.M., Edwards, D.D.: Artificial Intelligence: A Modern Approach. Prentice Hall, Upper Saddle River (2003)
  32. Schossau, J., Adami, C., Hintze, A.: Information-theoretic neuro-correlates boost evolution of cognitive systems. Entropy 18(1), 6 (2015)
  33. Spector, L., Robinson, A.: Genetic programming and autoconstructive evolution with the Push programming language. Genetic Programming and Evolvable Machines 3(1), 7–40 (2002)
  34. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1), 1929–1958 (2014)
  35. Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
  36. Stanley, K.O., Miikkulainen, R.: Evolving neural networks through augmenting topologies. Evolutionary Computation 10(2), 99–127 (2002)
  37. Teller, A.: The evolution of mental models. In: Advances in Genetic Programming, pp. 199–220. MIT Press (1994)
  38. Bäck, T.: Evolutionary Algorithms in Theory and Practice. Oxford University Press, New York (1996)
  39. Yao, X.: Evolving artificial neural networks. Proceedings of the IEEE 87(9), 1423–1447 (1999)
  40. Zhou, A., Qu, B.Y., Li, H., Zhao, S.Z., Suganthan, P.N., Zhang, Q.: Multiobjective evolutionary algorithms: A survey of the state of the art. Swarm and Evolutionary Computation 1(1), 32–49 (2011)

Acknowledgements

We thank Stephan Winkler for insightful discussions on hidden states in genetic programming trees, and for implementing the prototype for GP-Forests in EvoSphere.

Author information

Correspondence to Douglas Kirkpatrick.


Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Kirkpatrick, D., Hintze, A. (2020). The Evolution of Representations in Genetic Programming Trees. In: Banzhaf, W., Goodman, E., Sheneman, L., Trujillo, L., Worzel, B. (eds) Genetic Programming Theory and Practice XVII. Genetic and Evolutionary Computation. Springer, Cham. https://doi.org/10.1007/978-3-030-39958-0_7

  • DOI: https://doi.org/10.1007/978-3-030-39958-0_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-39957-3

  • Online ISBN: 978-3-030-39958-0

  • eBook Packages: Computer Science, Computer Science (R0)