Cooperative Motor Learning Model for Cerebellar Control of Balance and Locomotion

  • Mingxiao Ding
  • Naigong Yu
  • Xiaogang Ruan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3971)


A computational model in the framework of reinforcement learning, called the Cooperative Motor Learning (CML) model, is proposed to realize cerebellar control of balance and locomotion. In the CML model, the cerebellum acts as a parallel pathway in which the vermis and the flocculonodular lobe serve as executors of reflex actions, while the intermediate zone of the cerebellum participates in initiating voluntary actions. During the training phase, the cerebral cortex provides a predictive error signal through the climbing fibers to modulate the concurrently activated synapses between parallel fibers and Purkinje cells. Meanwhile, the intermediate zone of the cerebellum computes a temporal difference (TD) error as a training signal for the cerebral cortex. In a simulation experiment on balancing a double inverted pendulum on a cart, a well-trained CML model smoothly drives the pendulum into the equilibrium position.
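The cooperative scheme sketched in the abstract can be illustrated with a minimal actor-critic loop: a "cortical" critic estimates state value and emits a TD error, and that error signal (playing the role of the climbing-fiber teaching signal) modulates the "cerebellar" actor weights. The sketch below is not the authors' implementation; it assumes linear function approximation, a stochastic left/right push policy, and a linearized single inverted pendulum on a cart as a stand-in for the paper's double-pendulum plant. All parameter values, variable names, and the plant model are illustrative assumptions.

```python
import numpy as np

# Minimal actor-critic sketch of the cooperative motor learning idea.
# Critic ("cerebral cortex") computes the TD error; the same error,
# climbing-fiber-like, modulates the actor ("parallel fiber -> Purkinje
# cell") weights. Plant: linearized single cart-pole, an illustrative
# stand-in for the paper's double inverted pendulum on a cart.

rng = np.random.default_rng(0)

DT, G, M_CART, M_POLE, L = 0.02, 9.8, 1.0, 0.1, 0.5  # assumed constants

def step(state, force):
    """One Euler step of the frictionless cart-pole dynamics."""
    x, x_dot, th, th_dot = state
    total = M_CART + M_POLE
    temp = (force + M_POLE * L * th_dot**2 * np.sin(th)) / total
    th_acc = (G * np.sin(th) - np.cos(th) * temp) / (
        L * (4.0 / 3.0 - M_POLE * np.cos(th)**2 / total))
    x_acc = temp - M_POLE * L * th_acc * np.cos(th) / total
    new = state + DT * np.array([x_dot, x_acc, th_dot, th_acc])
    failed = abs(new[2]) > 0.21 or abs(new[0]) > 2.4
    return new, (-1.0 if failed else 0.0), failed

GAMMA, ALPHA_V, ALPHA_PI = 0.99, 0.05, 0.02  # assumed learning rates
v_w = np.zeros(4)    # critic (value) weights
pi_w = np.zeros(4)   # actor (parallel-fiber -> Purkinje) weights
lengths = []

for episode in range(300):
    s = rng.uniform(-0.05, 0.05, size=4)
    for t in range(1000):
        phi = s  # identity features keep the sketch short
        p_right = 1.0 / (1.0 + np.exp(-pi_w @ phi))
        a = 1.0 if rng.random() < p_right else -1.0  # push right or left
        s_next, r, done = step(s, 10.0 * a)
        # TD error: the "training signal" in the cooperative loop
        delta = r + (0.0 if done else GAMMA * (v_w @ s_next)) - v_w @ phi
        v_w += ALPHA_V * delta * phi
        # error-modulated update of the concurrently active synapses
        pi_w += ALPHA_PI * delta * ((a + 1.0) / 2.0 - p_right) * phi
        s = s_next
        if done:
            break
    lengths.append(t + 1)

print(f"mean balancing steps, last 50 episodes: {np.mean(lengths[-50:]):.0f}")
```

The division of labour mirrors the model's architecture only loosely: the critic update stands in for the cortical predictive-error computation, and the actor update stands in for climbing-fiber modulation of parallel-fiber/Purkinje-cell synapses.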


Keywords: Cerebral Cortex, Purkinje Cell, Reinforcement Learning, Intermediate Zone, Parallel Fiber





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Mingxiao Ding¹
  • Naigong Yu¹
  • Xiaogang Ruan¹
  1. Beijing University of Technology, Beijing, China
