Evolving state and memory in genetic programming

  • Simon E. Raik
  • David G. Browne
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1285)


This paper examines the use of state and memory in autonomous vehicle controllers. The controllers are computer programs evolved according to the genetic programming paradigm. Specifically, the performance of controllers using implicit state is compared with that of controllers using explicit state on a dynamic obstacle avoidance task. A control group, in which controllers possess no form of state, is used for comparison. The results indicate that while controllers with implicit state performed better than stateless controllers, those with explicit state proved superior to both other models. A reason for the performance difference is discussed in relation to a trade-off between the number of states representable in a program of fixed size and the number of instructions executed in determining each action.
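The distinction the abstract draws can be sketched in Python. This is a hypothetical illustration only: the paper's controllers are evolved program trees, and the `mem_read`/`mem_write` primitives below are assumptions modelled on the general idea of explicit, indexed memory, not the authors' implementation.

```python
# Hypothetical sketch of stateless vs. explicit-state controllers.
# All names here are illustrative assumptions, not the paper's code.

MEM_SIZE = 4  # a small fixed memory, as an evolved program might carry


def make_memory():
    """Create the controller's explicit memory, initialised to zeros."""
    return [0] * MEM_SIZE


def mem_read(mem, i):
    """Explicit-state primitive: read memory cell i (index wraps)."""
    return mem[i % MEM_SIZE]


def mem_write(mem, i, v):
    """Explicit-state primitive: write v into memory cell i (index wraps)."""
    mem[i % MEM_SIZE] = v
    return v


def stateless_controller(sensor):
    """A stateless controller maps the current sensor reading
    directly to an action; it cannot react to history."""
    return 1 if sensor > 0 else -1


def explicit_state_controller(sensor, mem):
    """An explicit-state controller can also consult and update memory,
    e.g. to notice that an obstacle has persisted across time steps."""
    last = mem_read(mem, 0)      # recall the previous sensor reading
    mem_write(mem, 0, sensor)    # remember the current one
    if sensor > 0 and last > 0:
        return -1                # obstacle persists: reverse direction
    return 1 if sensor > 0 else 0
```

With a few memory cells, a fixed-size program can distinguish many past histories, at the cost of spending some of its instruction budget on reads and writes. That is the trade-off the abstract refers to.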


Keywords: state, autonomous agents, genetic programming





Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Simon E. Raik (1)
  • David G. Browne (1)
  1. Monash University, Caulfield East, Australia
