Efficient Non-linear Control Through Neuroevolution

  • Faustino Gomez
  • Jürgen Schmidhuber
  • Risto Miikkulainen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4212)


Many complex control problems are not amenable to traditional controller design. Not only is it difficult to model real systems, but often it is unclear what kind of behavior is required. Reinforcement learning (RL) has made progress through direct interaction with the task environment, but it has been difficult to scale up to large and partially observable state spaces. In recent years, neuroevolution, the artificial evolution of neural networks, has shown promise in tasks with these two properties. This paper introduces a novel neuroevolution method called CoSyNE that evolves networks at the level of individual synaptic weights. In the most extensive comparison of RL methods to date, it was tested on difficult versions of the pole-balancing problem that involve large state spaces and hidden state. CoSyNE was found to be significantly more efficient and powerful than the other methods on these tasks, forming a promising foundation for solving challenging real-world control tasks.
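The core idea behind CoSyNE, cooperative coevolution at the level of individual synaptic weights, can be sketched as follows: each network weight gets its own subpopulation, complete networks are assembled from aligned members of every subpopulation, and values are permuted within subpopulations each generation so that weights are repeatedly recombined into new networks. This is a minimal illustrative sketch, not the paper's implementation; the function names, parameter values, and the toy fitness function are assumptions standing in for an actual control task.

```python
import random

N_WEIGHTS = 4      # weights in the (tiny) network being evolved
POP_SIZE = 20      # candidate values per weight subpopulation
GENERATIONS = 50

def fitness(weights):
    # Toy stand-in for a control task: reward weights near a target vector.
    target = [0.5, -0.3, 0.8, 0.1]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def cosyne_sketch(seed=0):
    rng = random.Random(seed)
    # subpops[j][i] = i-th candidate value for weight j
    subpops = [[rng.uniform(-1, 1) for _ in range(POP_SIZE)]
               for _ in range(N_WEIGHTS)]
    for _ in range(GENERATIONS):
        # Evaluate the networks formed by aligned rows of the subpopulations.
        scores = [fitness([subpops[j][i] for j in range(N_WEIGHTS)])
                  for i in range(POP_SIZE)]
        order = sorted(range(POP_SIZE), key=lambda i: scores[i], reverse=True)
        elite = order[:POP_SIZE // 4]
        for j in range(N_WEIGHTS):
            parents = [subpops[j][i] for i in elite]
            # Replace the rest with mutated copies of elite values.
            children = [rng.choice(parents) + rng.gauss(0, 0.1)
                        for _ in range(POP_SIZE - len(parents))]
            subpops[j] = parents + children
            # Permute within the subpopulation so weights recombine into
            # different networks next generation (the key CoSyNE step).
            rng.shuffle(subpops[j])
    best = max(range(POP_SIZE),
               key=lambda i: fitness([subpops[j][i] for j in range(N_WEIGHTS)]))
    return [subpops[j][best] for j in range(N_WEIGHTS)]

if __name__ == "__main__":
    print(fitness(cosyne_sketch()))
```

Because selection acts on weight values rather than whole genomes, a good value for one synapse can survive even when the particular network it was evaluated in performed poorly, which is what the intra-subpopulation permutation is meant to exploit.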




Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Faustino Gomez¹
  • Jürgen Schmidhuber¹,²
  • Risto Miikkulainen³

  1. Dalle Molle Institute for Artificial Intelligence (IDSIA), Lugano, Switzerland
  2. Technische Universität München, Garching, Germany
  3. Department of Computer Sciences, University of Texas, Austin, USA
