Evolving Reinforcement Learning-Like Abilities for Robots
In [7], Yamauchi and Beer explored the ability of continuous-time recurrent neural networks (CTRNNs) to display reinforcement-learning-like abilities. The tasks investigated were the generation and learning of short bit sequences. This “learning” came about not through modification of synaptic strengths, but purely through the internal dynamics of the evolved networks. In this paper the approach is extended to two embodied-agent tasks, in which simulated robots have to acquire and retain “knowledge” while moving around different mazes. The evolved controllers are analyzed and the results are discussed.
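As background, the CTRNN model mentioned above is conventionally defined by the state equation τ_i ẏ_i = −y_i + Σ_j w_ji σ(y_j + θ_j) + I_i. The sketch below shows a minimal Euler-integration step of that standard equation; it is an illustration of the general model, not code from the paper, and all names (`ctrnn_step`, etc.) are made up here.

```python
import math

def sigmoid(x):
    """Standard logistic activation used in CTRNNs."""
    return 1.0 / (1.0 + math.exp(-x))

def ctrnn_step(y, weights, biases, taus, inputs, dt=0.01):
    """One forward-Euler step of the standard CTRNN equation:

        tau_i * dy_i/dt = -y_i + sum_j w[j][i] * sigmoid(y_j + theta_j) + I_i

    y       : list of neuron states
    weights : weights[j][i] is the connection from neuron j to neuron i
    biases  : per-neuron bias theta_j
    taus    : per-neuron time constant tau_i (> 0)
    inputs  : external input I_i to each neuron
    """
    n = len(y)
    outputs = [sigmoid(y[j] + biases[j]) for j in range(n)]
    new_y = []
    for i in range(n):
        dy = (-y[i]
              + sum(weights[j][i] * outputs[j] for j in range(n))
              + inputs[i]) / taus[i]
        new_y.append(y[i] + dt * dy)
    return new_y
```

Because the state `y` evolves continuously and recurrent connections let past inputs persist in it, an evolved network of this form can retain information over time without any change to `weights`; that persistence is what the paper exploits as learning-like behavior.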
Keywords: Hidden Neuron, Sensory Receptor, Neural Controller, Black Stripe, Dynamical Neural Network
- 1. J. Blynel and D. Floreano. Levels of dynamics and adaptive behaviour in evolutionary neural controllers. In Hallam et al. [2], pages 272–281.
- 2. B. Hallam, D. Floreano, J. Hallam, G. Hayes, and J.-A. Meyer, editors. From Animals to Animats 7: Proceedings of the Seventh International Conference on Simulation of Adaptive Behavior. MIT Press-Bradford Books, Cambridge, MA, 2002.
- 4. R. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
- 5. M. Thieme and T. Ziemke. The road sign problem revisited: Handling delayed response tasks with neural robot controllers. In Hallam et al. [2], pages 228–229.
- 6. E. Tuci, I. Harvey, and M. Quinn. Evolving integrated controllers for autonomous learning robots using dynamic neural networks. In Hallam et al. [2], pages 282–291.
- 7. B. Yamauchi and R. D. Beer. Integrating reactive, sequential, and learning behaviour using dynamical neural networks. In D. Cliff, P. Husbands, J.-A. Meyer, and S. W. Wilson, editors, From Animals to Animats III: Proceedings of the Third International Conference on Simulation of Adaptive Behavior, pages 382–391. MIT Press-Bradford Books, Cambridge, MA, 1994.