Camargo J.A., Barrios-Aranibar D. (2016) \(\delta \)-Radius Unified Influence Value Reinforcement Learning. In: Omatu S. et al. (eds) Distributed Computing and Artificial Intelligence, 13th International Conference. Advances in Intelligent Systems and Computing, vol 474. Springer, Cham
Nowadays, the Decentralized Partially Observable Markov Decision Process (Dec-POMDP) framework represents the state of the art in Multi-Agent Systems (MAS). Dec-POMDP incorporates the concepts of independent views and message exchange into the original POMDP model, opening new possibilities for an independent view for each agent in the system. Nevertheless, there are some limitations regarding communication.
Regarding communication in MAS, Dec-POMDP still focuses on message structure and content rather than on the communication relationships between agents, which are our focus. On the other hand, convergence in MAS concerns the convergence of the group of agents as a whole; to achieve it, collaboration between the agents is necessary.
The collaboration and/or communication cost in MAS is high in computational terms; to reduce it, it is important to limit communication between agents to only the necessary cases.
The present approach focuses on the impact of limiting communication in MAS, and on how doing so may improve the use of system resources by reducing computational cost without harming global convergence. In this sense, \(\delta\)-radius is a unified algorithm, based on the Influence Value Reinforcement Learning and Independent Learning models, that allows communication to be restricted by varying \(\delta\).
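The restriction idea can be sketched as a simple distance filter: an agent exchanges influence information only with agents within radius \(\delta\), so \(\delta = 0\) collapses to Independent Learning, while a sufficiently large \(\delta\) approaches full Influence Value Reinforcement Learning. The following Python sketch illustrates this under assumed names; the positions, agent ids, and the `neighbors_within_delta` helper are illustrative assumptions, not the paper's actual API.

```python
import math

def neighbors_within_delta(agent_id, positions, delta):
    """Return ids of agents within Euclidean distance delta of agent_id.

    positions: hypothetical mapping from agent id to (x, y) coordinates.
    These are the only agents with which agent_id would exchange
    influence values under the delta-radius restriction.
    """
    ax, ay = positions[agent_id]
    return [
        other
        for other, (ox, oy) in positions.items()
        if other != agent_id and math.hypot(ox - ax, oy - ay) <= delta
    ]

# Illustrative layout of three agents.
positions = {"a": (0, 0), "b": (1, 0), "c": (5, 5)}

# delta = 0: no neighbors, so each agent learns independently.
print(neighbors_within_delta("a", positions, 0))   # []
# A large delta lets every agent influence every other, as in full IVRL.
print(neighbors_within_delta("a", positions, 10))  # ['b', 'c']
```

Varying \(\delta\) between these extremes trades communication cost against the amount of shared influence information, which is the knob the unified algorithm exposes.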
Keywords: Multi-Agent Systems · Artificial Intelligence · Markov Decision Process