Geometry of Dynamic Movement Primitives in Neural Space: A FORCE-Learning Approach

Conference paper
Part of the Advances in Cognitive Neurodynamics book series (ICCN)

Abstract

Dynamic movement primitives are one of the key concepts for understanding the dexterous and flexible movements of biological bodies. In robotics engineering, simple nonlinear differential equations are used to generate movement primitives from demonstrations, but it remains unclear how nonlinear dynamics in the real brain can generate movement primitives in biologically natural ways. The aim of this study is to investigate a possible role of nonlinear dynamics in random recurrent neural networks (RNNs) in skillful motor learning. We show that one-shot temporal patterns, such as arm-reaching movements, can be trained by a type of RNN learning, the so-called FORCE learning recently proposed by Sussillo and Abbott, and that a number of learned patterns can be summarized as a manifold embedded in the space of synaptic weights of the readout neurons. We also discuss how generalization of learning to untrained motor patterns can be achieved by identifying nonlinear coordinates (meta-parameters) on this manifold at a higher level of the central nervous system.

Keywords

Dynamic movement primitives · Nonlinear dynamics · Robotics · Recurrent neural networks · Motor learning

Acknowledgements

We would like to thank I. Tsuda, S. Akaho and Y. Sakaguchi for fruitful discussions. We also thank D. Rodriguez for preparing arm reaching movements data. This study is partially supported by Grant-in-Aid for Scientific Research (No. 24120713), MEXT, Japan.

References

  1. N. Bernstein, The Coordination and Regulation of Movements, Pergamon, 1967.
  2. A.J. Ijspeert, J. Nakanishi and S. Schaal, Learning Attractor Landscapes for Learning Motor Primitives, in: Advances in Neural Information Processing Systems, 1523 (2002).
  3. I. Tsuda, Toward an Interpretation of Dynamic Neural Activity in Terms of Chaotic Dynamical Systems, Behavioral and Brain Sciences 24, 793 (2001).
  4. H. Sompolinsky, A. Crisanti and H.J. Sommers, Chaos in Random Neural Networks, Physical Review Letters 61, 259 (1988).
  5. H. Jaeger, W. Maass and J. Principe, Special Issue on Echo State Networks and Liquid State Machines, Neural Networks 20, 287 (2007).
  6. D. Sussillo and L.F. Abbott, Generating Coherent Patterns of Activity from Chaotic Neural Networks, Neuron 63, 544 (2009).
  7. T.D. Sanger, Optimal Unsupervised Learning in a Single-Layer Linear Feedforward Neural Network, Neural Networks 2, 459 (1989).
  8. T. Flash and N. Hogan, The Coordination of Arm Movements: An Experimentally Confirmed Mathematical Model, The Journal of Neuroscience 5, 1688 (1985).
  9. E. Todorov and W. Li, A Generalized Iterative LQG Method for Locally-Optimal Feedback Control of Constrained Nonlinear Stochastic Systems, in: Proceedings of the 2005 American Control Conference, 300 (2005).

Copyright information

© Springer Science+Business Media Dordrecht 2015

Authors and Affiliations

  1. Department of Physics and Astronomy, Kagoshima University, Kagoshima, Japan
  2. Decoding and Controlling Brain Information, PRESTO, JST, Kawaguchi, Japan
