Using Monte Carlo Search with Data Aggregation to Improve Robot Soccer Policies

  • Francesco Riccio
  • Roberto Capobianco
  • Daniele Nardi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9776)

Abstract

RoboCup soccer competitions are considered among the most challenging multi-robot adversarial environments, due to their high dynamism and partial observability. In this paper we introduce a method based on a combination of Monte Carlo search and data aggregation (MCSDA) that adapts discrete-action soccer policies for a defender robot to the strategy of the opponent team. By exploiting a simple representation of the domain, a supervised learning algorithm is trained over an initial collection of data consisting of several simulations of human expert policies. Monte Carlo policy rollouts are then generated and aggregated with the previous data to improve the learned policy over multiple epochs and games. The proposed approach has been extensively tested both on a soccer-dedicated simulator and on real robots. Using this method, our learning robot soccer team achieves an improvement in ball interceptions, as well as a reduction in the number of opponents’ goals. In addition to this performance gain, the whole team attains a more effective positioning within the field.
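
The abstract summarizes the MCSDA loop at a high level; the sketch below illustrates how such a loop could look in Python. It is a minimal sketch under stated assumptions, not the authors’ implementation: the `simulator` interface (`reset()` and a stateless `step(state, action)` returning next state, reward, and a done flag), the k-nearest-neighbour classifier standing in for the supervised learner, and all hyper-parameters (discount, horizon, rollout counts) are illustrative.

```python
from sklearn.neighbors import KNeighborsClassifier  # assumption: any discrete-action classifier works


def mcsda(simulator, expert_dataset, actions, epochs=10,
          episodes_per_epoch=5, rollouts_per_action=20, horizon=30):
    """Hedged sketch of an MCSDA-style loop:
    (1) fit a classifier on expert (state, action) pairs;
    (2) at each visited state, score each discrete action by the mean
        return of Monte Carlo rollouts that follow the current policy;
    (3) aggregate the best-scoring pairs into the dataset and retrain,
        in the spirit of DAgger-style data aggregation.
    The simulator interface and hyper-parameters are assumptions."""
    data = list(expert_dataset)  # initial demonstrations from expert play
    policy = _fit(data)
    for _ in range(epochs):
        new_pairs = []
        for _ in range(episodes_per_epoch):
            state = simulator.reset()
            for _ in range(horizon):
                # Monte Carlo search: evaluate every action by rollouts
                scores = {a: _mc_value(simulator, state, a, policy,
                                       rollouts_per_action, horizon)
                          for a in actions}
                best = max(scores, key=scores.get)
                new_pairs.append((state, best))
                state, _, done = simulator.step(state, best)
                if done:
                    break
        data.extend(new_pairs)   # data aggregation step
        policy = _fit(data)      # retrain on the aggregated dataset
    return policy


def _fit(data):
    """Supervised step: map states to discrete actions."""
    states, labels = zip(*data)
    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit(list(states), list(labels))
    return clf


def _mc_value(simulator, state, first_action, policy, n_rollouts, horizon,
              gamma=0.95):
    """Average discounted return of taking `first_action` in `state`,
    then following the learned policy for a fixed horizon."""
    total = 0.0
    for _ in range(n_rollouts):
        s, ret, discount, a = state, 0.0, 1.0, first_action
        for _ in range(horizon):
            s, r, done = simulator.step(s, a)
            ret += discount * r
            discount *= gamma
            if done:
                break
            a = policy.predict([s])[0]
        total += ret
    return total / n_rollouts
```

Aggregating the rollout-selected pairs into the training set, rather than retraining only on the newest data, is the design choice that keeps the classifier’s training distribution close to the states the learned policy actually visits.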

Keywords

Policy learning · Reinforcement learning · Humanoid robots · Multi-robot systems

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Francesco Riccio¹
  • Roberto Capobianco¹
  • Daniele Nardi¹

  1. Department of Computer, Control and Management Engineering “Antonio Ruberti”, Sapienza University of Rome, Rome, Italy