
Multiagent Reinforcement Learning for a Planetary Exploration Multirobot System

  • Zhang Zheng
  • Ma Shu-gen
  • Cao Bing-gang
  • Zhang Li-ping
  • Li Bin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4088)

Abstract

In the planetary rover system called “SMC Rover”, motion coordination between robots is a key problem to be solved. Multiagent reinforcement learning methods for learning multirobot coordination strategies are investigated, and a reinforcement-learning-based coordination mechanism is proposed for the exploration system. The task of four robots climbing a slope is studied in detail as an example. The actions of the robots are divided into two layers and realized separately, which reduces the complexity of the climbing task. A Q-learning based multirobot coordination strategy mechanism is proposed for the climbing mission, and an OpenGL 3D simulation platform is used to verify the strategy and the learning results.
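
A minimal sketch of the kind of tabular Q-learning update such a coordination strategy learner could rest on, assuming a discretized state and a small set of high-level actions per robot; the action names, reward signal, and hyperparameters (alpha, gamma, epsilon) are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

# Assumed high-level action set for one robot in the climbing task (hypothetical).
ACTIONS = ["hold", "raise_arm", "lower_arm", "drive_forward"]

class QLearningAgent:
    """One-step tabular Q-learning agent; state encoding and rewards come from the environment."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(state, action)] -> estimated value
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor
        self.epsilon = epsilon        # exploration rate

    def choose(self, state):
        # Epsilon-greedy exploration over the high-level action set.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning backup:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

In a multirobot setting each robot could run such an agent independently (as in Tan's independent-learner scheme) or the agents could learn over joint states and actions; the paper's two-layer action decomposition keeps the per-agent action space small.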

Keywords

Joint Angle · Reinforcement Learning · Multiagent System · Coordination Strategy · Single Robot



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Zhang Zheng 1, 2
  • Ma Shu-gen 1, 2, 3
  • Cao Bing-gang 1
  • Zhang Li-ping 1, 2
  • Li Bin 2
  1. School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, China
  2. Robotics Laboratory, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
  3. COE Research Institute, Ritsumeikan University, Kusatsu, Japan
