A Distributed Cooperative Reinforcement Learning Method for Decision Making in Fire Brigade Teams

  • Abbas Abdolmaleki
  • Mostafa Movahedi
  • Nuno Lau
  • Luís Paulo Reis
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7500)

Abstract

Decision making in complex, dynamic, multi-agent environments such as disaster spaces is a challenging problem in Artificial Intelligence. This paper develops a distributed coordination and cooperation method, based on reinforcement learning, that enables a team of homogeneous, autonomous fire-fighter agents with similar skills to accomplish complex task allocation, with an emphasis on firefighting tasks in a disaster space. The main contribution is the use of reinforcement learning to overcome the bottleneck caused by the dynamic and varied conditions of such situations, and to improve the distributed coordination of fire-fighter agents extinguishing fires within a disaster zone. The proposed method increases the speed of learning, has very low memory usage, and remains scalable and robust as the number of agents and the complexity of the task increase. The effectiveness of the proposed method is shown through simulation results.
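
The abstract describes distributed, per-agent learning with low memory use but does not give implementation details here. The sketch below is one minimal, hypothetical reading of that idea, not the paper's actual method: each agent runs independent one-step value learning over coarsely discretized fire features and greedily selects its own target. The class, feature, and reward names are illustrative assumptions.

```python
# Hypothetical sketch of distributed per-agent value learning for
# fire-target selection. Not the paper's implementation; all names
# (FireBrigadeAgent, distance/intensity features, toy reward) are
# assumptions made for illustration.

import random
from collections import defaultdict

class FireBrigadeAgent:
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.q = defaultdict(float)   # state bucket -> learned value
        self.alpha = alpha            # learning rate
        self.epsilon = epsilon        # exploration rate

    def bucket(self, fire):
        # Discretize observable fire features into a coarse key;
        # coarse buckets keep the value table (and memory use) small.
        return (round(fire["distance"] / 50), fire["intensity"])

    def choose_fire(self, fires):
        # Epsilon-greedy selection over visible fires. There is no
        # central controller: each agent decides from its own table.
        if random.random() < self.epsilon:
            return random.choice(fires)
        return max(fires, key=lambda f: self.q[self.bucket(f)])

    def update(self, fire, reward):
        # One-step update toward the observed reward, e.g. the
        # reduction in fire intensity after acting on it.
        key = self.bucket(fire)
        self.q[key] += self.alpha * (reward - self.q[key])

# Usage: two agents independently pick and "fight" simulated fires.
agents = [FireBrigadeAgent() for _ in range(2)]
for step in range(100):
    fires = [{"distance": random.uniform(0, 500),
              "intensity": random.randint(1, 3)} for _ in range(4)]
    for agent in agents:
        target = agent.choose_fire(fires)
        # Toy reward: nearer, weaker fires are extinguished faster.
        reward = 1.0 / (1 + target["distance"] / 100) / target["intensity"]
        agent.update(target, reward)
```

Because each agent holds only its own small table and acts on local estimates, the scheme degrades gracefully as agents or fires are added, which is the scalability property the abstract claims for the proposed method.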

Keywords

RoboCup Rescue Simulation · Multi-agent system · Fire Brigade · Decision Making · Reinforcement Learning

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Abbas Abdolmaleki (1, 2)
  • Mostafa Movahedi (1)
  • Nuno Lau (1, 3)
  • Luís Paulo Reis (2, 4)

  1. IEETA – Institute of Electronics and Telematics Engineering of Aveiro, Portugal
  2. LIACC – Artificial Intelligence and Computer Science Lab, Porto, Portugal
  3. UA – University of Aveiro, Aveiro, Portugal
  4. EEUM – School of Engineering, University of Minho – DSI, Guimarães, Portugal
