Network Distributed POMDP with Communication
While Distributed POMDPs have become popular for modeling multiagent systems in uncertain domains, it is the Networked Distributed POMDP (ND-POMDP) model that has begun to scale up the number of agents by exploiting the locality of agents' interactions. However, prior work on ND-POMDPs has not addressed communication. Without communication, the size of each agent's local policy grows exponentially in the time horizon. To overcome this problem, we extend existing algorithms so that agents periodically communicate their observation and action histories to each other. After communication, agents can restart from a new synchronized belief state, thereby avoiding the exponential growth in the size of local policies. Furthermore, we introduce an idea similar to the Point-Based Value Iteration algorithm, approximating the value function with a fixed number of representative points. Our experimental results show that we can obtain much longer policies than existing algorithms as long as the interval between communications is small.
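The core observation above can be made concrete with a small sketch. The following Python snippet is illustrative only (it is not the paper's algorithm): it counts how many distinct observation histories a single agent's local policy must cover at each time step, with and without periodic synchronization. The function name and parameters are hypothetical, introduced here just to show why the policy size stays bounded when agents resynchronize every few steps.

```python
def policy_size(horizon, n_obs, comm_interval=None):
    """Number of distinct observation histories an agent must plan for.

    Without communication, the count at step t is n_obs**t, i.e. it
    grows exponentially in the horizon.  If agents synchronize their
    histories every comm_interval steps, each agent's history resets
    after communication, so the count is bounded by
    n_obs**(comm_interval - 1).
    """
    sizes = []
    for t in range(horizon):
        if comm_interval is None:
            sizes.append(n_obs ** t)          # exponential growth
        else:
            sizes.append(n_obs ** (t % comm_interval))  # bounded
    return sizes

# Two possible observations per step, horizon 8:
no_comm = policy_size(8, 2)                      # grows up to 2**7 = 128
with_comm = policy_size(8, 2, comm_interval=3)   # never exceeds 2**2 = 4
```

With a communication interval of 3, the per-step history count cycles through 1, 2, 4 and never grows, while without communication it doubles at every step; this is the effect the synchronized belief state is meant to achieve.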
Keywords: Multiagent System, Belief State, Local Policy, Representative Point, Heuristic Function