Evolution versus Learning in Temporal Neural Networks
In this paper, we study the difference between two ways of setting synaptic weights in a "temporal" neural network. Used as the controller of a simulated mobile robot, the neural network is either evolved through an evolutionary algorithm or trained via a Hebbian reinforcement learning rule. We compare both approaches and argue that, ultimately, only the learning paradigm is able to meaningfully exploit the temporal features of the neural network.
Keywords: Mobile Robot · Synaptic Weight · Learning Paradigm · Neural Controller · Spike Timing Dependent Plasticity
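The keywords mention Spike Timing Dependent Plasticity (STDP), a temporally sensitive form of Hebbian learning. As a point of reference, a minimal sketch of a standard pair-based STDP weight update is shown below; the amplitudes and time constant are illustrative assumptions, not the paper's actual learning rule.

```python
import numpy as np

# Illustrative STDP parameters (assumed values, not from the paper)
A_PLUS = 0.05    # potentiation amplitude
A_MINUS = 0.055  # depression amplitude
TAU = 20.0       # exponential time constant (ms)

def stdp_delta_w(t_pre, t_post):
    """Weight change for one pre/post spike pair.

    Pre-before-post firing (dt > 0) potentiates the synapse;
    post-before-pre firing (dt <= 0) depresses it. The magnitude
    decays exponentially with the spike-time difference.
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU)
    return -A_MINUS * np.exp(dt / TAU)
```

Because the sign and size of each update depend on spike timing, such a rule can exploit the temporal structure of network activity in a way that a timing-blind weight-setting procedure (e.g. an evolutionary search over static weights) cannot.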