
Evolving Reinforcement Learning-Like Abilities for Robots

  • Jesper Blynel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2606)

Abstract

In [8], Yamauchi and Beer explored the ability of continuous-time recurrent neural networks (CTRNNs) to display reinforcement learning-like behavior. The investigated tasks were the generation and learning of short bit sequences. This “learning” came about without modification of synaptic strengths, but simply through the internal dynamics of the evolved networks. In this paper, the approach is extended to two embodied-agent tasks in which simulated robots have to acquire and retain “knowledge” while moving around different mazes. The evolved controllers are analyzed and the results are discussed.
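The paper itself gives no code; as a rough illustration of the controller class involved, the sketch below implements the standard CTRNN state equation used by Yamauchi and Beer, tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j) + I_i, with simple Euler integration. The network size, parameter values, and variable names are illustrative assumptions, not the evolved controllers analyzed in the paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_step(y, W, theta, tau, I, dt=0.01):
    # One Euler step of: tau_i * dy_i/dt = -y_i + sum_j W[j, i] * sigmoid(y_j + theta_j) + I_i
    dydt = (-y + W.T @ sigmoid(y + theta) + I) / tau
    return y + dt * dydt

# Illustrative 3-neuron network with arbitrary (not evolved) parameters.
rng = np.random.default_rng(0)
n = 3
y = np.zeros(n)                     # neuron states
W = rng.normal(0.0, 1.0, (n, n))    # W[j, i] = weight from neuron j to neuron i
theta = rng.normal(0.0, 1.0, n)     # biases
tau = rng.uniform(0.5, 2.0, n)      # time constants
I = np.array([0.5, 0.0, 0.0])       # external (sensory) input

for _ in range(1000):               # integrate for 10 simulated seconds
    y = ctrnn_step(y, W, theta, tau, I)

outputs = sigmoid(y + theta)        # firing rates that a motor layer would read out
print(outputs)

In the evolutionary setting described in the abstract, W, theta, and tau would be the parameters shaped by artificial evolution, and any within-trial “learning” would appear only as movement of the state vector y, since no weights change at run time.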

Keywords

Hidden Neuron, Sensory Receptor, Neural Controller, Black Stripe, Dynamical Neural Network


References

  1. J. Blynel and D. Floreano. Levels of dynamics and adaptive behaviour in evolutionary neural controllers. In Hallam et al. [2], pages 272–281.
  2. B. Hallam, D. Floreano, J. Hallam, G. Hayes, and J.-A. Meyer, editors. From Animals to Animats 7: Proceedings of the Seventh International Conference on Simulation of Adaptive Behavior. MIT Press-Bradford Books, Cambridge, MA, 2002.
  3. O. Miglino, H. H. Lund, and S. Nolfi. Evolving Mobile Robots in Simulated and Real Environments. Artificial Life, 2(4):417–434, 1995.
  4. R. Sutton and A. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, 1998.
  5. M. Thieme and T. Ziemke. The road sign problem revisited: Handling delayed response tasks with neural robot controllers. In Hallam et al. [2], pages 228–229.
  6. E. Tuci, I. Harvey, and M. Quinn. Evolving integrated controllers for autonomous learning robots using dynamic neural networks. In Hallam et al. [2], pages 282–291.
  7. B. Yamauchi and R. D. Beer. Integrating reactive, sequential, and learning behaviour using dynamical neural networks. In D. Cliff, P. Husbands, J.-A. Meyer, and S. W. Wilson, editors, From Animals to Animats III: Proceedings of the Third International Conference on Simulation of Adaptive Behavior, pages 382–391. MIT Press-Bradford Books, Cambridge, MA, 1994.
  8. B. Yamauchi and R. D. Beer. Sequential behavior and learning in evolved dynamical neural networks. Adaptive Behavior, 2(3):219–246, 1994.

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Jesper Blynel
  1. Autonomous Systems Lab, Institute of Systems Engineering, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
