
Autonomous Learning of Ball Trapping in the Four-Legged Robot League

  • Hayato Kobayashi
  • Tsugutoyo Osaki
  • Eric Williams
  • Akira Ishino
  • Ayumi Shinohara
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4434)

Abstract

This paper describes an autonomous learning method used with real robots to acquire ball trapping skills in the four-legged robot league. These skills involve stopping and controlling an oncoming ball and are essential for robots to pass the ball to one another. We first prepare some training equipment and experiment with a single robot. With our method, the robot can acquire these skills on its own, much as a human practicing against a wall can learn the proper movements and actions of soccer alone. We also experiment with two robots, and our findings suggest that robots that communicate with each other can learn more rapidly than those that do not.
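
The abstract does not give the concrete learning algorithm, so the following is only a rough illustrative sketch: it assumes a tabular Q-learning setup for a hypothetical "trap the oncoming ball" task, with a simple Q-value averaging step standing in for the inter-robot communication mentioned above. The state encoding, action set, rewards, and the share() helper are all assumptions for illustration, not the authors' method.

    # A minimal sketch (not the authors' implementation): tabular Q-learning for a
    # hypothetical ball-trapping task. State encoding, action set, and the
    # value-averaging form of "communication" are illustrative assumptions.
    import random
    from collections import defaultdict

    ACTIONS = ["wait", "trap"]             # assumed discrete action set
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

    class Learner:
        def __init__(self):
            self.q = defaultdict(float)    # (state, action) -> estimated value

        def act(self, state):
            # epsilon-greedy action selection
            if random.random() < EPSILON:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: self.q[(state, a)])

        def update(self, state, action, reward, next_state):
            # standard one-step Q-learning update
            best_next = max(self.q[(next_state, a)] for a in ACTIONS)
            target = reward + GAMMA * best_next
            self.q[(state, action)] += ALPHA * (target - self.q[(state, action)])

    def share(a, b):
        # One simple stand-in for inter-robot communication: average the Q-tables.
        for key in set(a.q) | set(b.q):
            avg = (a.q[key] + b.q[key]) / 2.0
            a.q[key] = b.q[key] = avg

In the single-robot setting, one Learner would be trained against the wall on its own; in the two-robot setting, calling share() periodically pools what each robot has learned, which is the kind of communication the abstract credits with faster learning.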

Keywords

Active Learner, Trap Action, Quadruped Robot, Reinforcement Learning Algorithm, Autonomous Learning

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Hayato Kobayashi (1)
  • Tsugutoyo Osaki (2)
  • Eric Williams (2)
  • Akira Ishino (3)
  • Ayumi Shinohara (2)
  1. Department of Informatics, Kyushu University, Japan
  2. Graduate School of Information Science, Tohoku University, Japan
  3. Office for Information of University Evaluation, Kyushu University, Japan
