A Hierarchical Learning System Incorporating with Supervised, Unsupervised and Reinforcement Learning

  • Jinglu Hu
  • Takafumi Sasakawa
  • Kotaro Hirasawa
  • Huiru Zheng
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4491)

Abstract

According to Hebb’s cell assembly theory, the brain has the capability of function localization. It has also been suggested that the brain employs three distinct learning paradigms: supervised, unsupervised and reinforcement learning. Inspired by this knowledge of the brain, we present a hierarchical learning system consisting of three parts: a supervised learning (SL) part, an unsupervised learning (UL) part and a reinforcement learning (RL) part. The SL part is the main part and learns the input–output mapping; the UL part realizes the function localization of the learning system by controlling the firing strength of neurons in the SL part based on input patterns; the RL part optimizes system performance by adjusting the parameters of the UL part. Simulation results confirm the effectiveness of the proposed hierarchical learning system.
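The division of labor among the three parts can be sketched in code. This is a minimal illustrative sketch only: the SOM-like gate, the tanh network, the hill-climbing stand-in for the RL part, and all names and update rules are assumptions for demonstration, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

class ULGate:
    """UL part (illustrative): gates SL neurons by input pattern via a
    soft nearest-prototype assignment, loosely SOM-like."""
    def __init__(self, n_hidden, dim, beta=1.0):
        self.prototypes = rng.normal(size=(n_hidden, dim))
        self.beta = beta  # gate sharpness; tuned by the RL part below

    def firing_strength(self, x):
        d = np.sum((self.prototypes - x) ** 2, axis=1)
        g = np.exp(-self.beta * d)
        return g / g.sum()

    def adapt(self, x, eta=0.05):
        # SOM-like update: move the winning prototype toward the input
        w = np.argmin(np.sum((self.prototypes - x) ** 2, axis=1))
        self.prototypes[w] += eta * (x - self.prototypes[w])

class SLNet:
    """SL part: one-hidden-layer network whose hidden activations are
    scaled by the UL gate, localizing function to input-dependent neurons."""
    def __init__(self, dim, n_hidden, lr=0.1):
        self.W1 = rng.normal(scale=0.5, size=(n_hidden, dim))
        self.w2 = rng.normal(scale=0.5, size=n_hidden)
        self.lr = lr

    def train_step(self, x, y, gate):
        a = self.W1 @ x
        h = np.tanh(a) * gate          # gated hidden activations
        err = h @ self.w2 - y
        # plain gradient descent on the squared error
        grad_h = err * self.w2 * gate * (1 - np.tanh(a) ** 2)
        self.W1 -= self.lr * np.outer(grad_h, x)
        self.w2 -= self.lr * err * h
        return err ** 2

def optimize(X, Y, epochs=200):
    """RL part (stand-in): treat the gate sharpness as an action and keep
    random perturbations that lower the training error (hill climbing)."""
    gate, net = ULGate(8, X.shape[1]), SLNet(X.shape[1], 8)
    best_beta, best_err = gate.beta, np.inf
    for _ in range(epochs):
        gate.beta = best_beta + rng.normal(scale=0.1)
        errs = []
        for x, y in zip(X, Y):
            gate.adapt(x)
            errs.append(net.train_step(x, y, gate.firing_strength(x)))
        if np.mean(errs) < best_err:
            best_err, best_beta = np.mean(errs), gate.beta
    return net, gate, best_err
```

The key structural point the sketch mirrors is the hierarchy: the UL gate modulates the SL neurons' firing strengths per input, while the RL loop only touches the UL part's parameter, never the SL weights directly.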


References

  1. Hebb, D.: The Organization of Behavior: A Neuropsychological Theory. John Wiley & Sons, New York (1949)
  2. Sawaguchi, T.: Brain Structure of Intelligence and Evolution. Kaimeisha, Tokyo (1989)
  3. Doya, K.: What are the Computations of the Cerebellum, the Basal Ganglia, and the Cerebral Cortex? Neural Networks 12, 961–974 (1999)
  4. Sasakawa, T., Hu, J., Hirasawa, K.: Self-organized Function Localization Neural Network. In: Proc. of the International Joint Conference on Neural Networks (IJCNN’04), Budapest (2004)
  5. Kohonen, T.: Self-Organizing Maps, 3rd edn. Springer, Heidelberg (2000)
  6. Sasakawa, T., Hu, J., Hirasawa, K.: Performance Optimization of Function Localization Neural Network by Using Reinforcement Learning. In: Proc. of the International Joint Conference on Neural Networks (IJCNN’05), Montreal, pp. 1314–1319 (2005)
  7. Sutton, R., Barto, A.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
  8. Solla, S., Fleisher, M.: Generalization in Feedforward Neural Networks. In: Proc. of the IEEE International Joint Conference on Neural Networks, Seattle, pp. 77–82 (1991)
  9. Hagan, M., Menhaj, M.: Training Feedforward Networks with the Marquardt Algorithm. IEEE Trans. Neural Networks 5, 989–993 (1994)
  10. Demuth, H., Beale, M.: Neural Network Toolbox: For Use with MATLAB. The MathWorks Inc. (2000)

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Jinglu Hu (1)
  • Takafumi Sasakawa (1)
  • Kotaro Hirasawa (1)
  • Huiru Zheng (2)
  1. Waseda University, Kitakyushu, Fukuoka, Japan
  2. University of Ulster, N. Ireland, UK
