Autonomous Control of Octopus-Like Manipulator Using Reinforcement Learning

Conference paper
Part of the Advances in Intelligent and Soft Computing book series (AINSC, volume 151)

Abstract

In this paper, we apply reinforcement learning to an octopus-like manipulator, using grasping and calling tasks. We show that by designing the manipulator to exploit properties of the real world, the state-action space can be abstracted, which resolves the problems of real-time learning and poor generalization ability.
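The abstract describes applying reinforcement learning over an abstracted (small, discrete) state-action space. As a hedged illustration only, the sketch below shows standard tabular Q-learning on a toy chain environment; the `toy_step` dynamics, state/action counts, and all parameter values are assumptions for the example and do not reproduce the paper's manipulator model.

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning over a small, abstracted state-action space.

    `step(s, a) -> (next_state, reward, done)` is a user-supplied
    environment model (hypothetical here).
    """
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # Standard Q-learning update (Watkins & Dayan style).
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# Toy chain: action 1 moves right, action 0 moves left;
# reaching state 4 yields reward 1 and ends the episode.
def toy_step(s, a):
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

random.seed(0)
Q = q_learning(5, 2, toy_step)
```

After training, the learned table prefers moving right in every non-terminal state; abstracting the manipulator's continuous configuration into a handful of such discrete states is what makes this tabular approach tractable in real time.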

Keywords

Generalization · Abstraction of state-action space · Grasping



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

Hosei University, Tokyo, Japan
