Reference:

Joni Pajarinen and Jaakko Peltonen. Efficient Planning for Factored Infinite-Horizon DEC-POMDPs. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI), pages 325–331. AAAI Press, July 2011.

Abstract:

Decentralized partially observable Markov decision processes (DEC-POMDPs) are used to plan policies for multiple agents that must maximize a joint reward function but do not communicate with each other. The agents act under uncertainty about each other and the environment. This planning task arises in the optimization of wireless networks and in other scenarios where communication between agents is restricted by costs or physical limits. DEC-POMDPs are a promising solution, but optimizing policies quickly becomes computationally intractable as problem size grows. Factored DEC-POMDPs allow large problems to be described in compact form, but have the same worst-case complexity as non-factored DEC-POMDPs. We propose an efficient optimization algorithm for large factored infinite-horizon DEC-POMDPs. We formulate expectation-maximization-based optimization into a new form, where complexity can be kept tractable by factored approximations. Our method performs well, and it can solve problems with more agents and larger state spaces than state-of-the-art DEC-POMDP methods. We give results for factored infinite-horizon DEC-POMDP problems with up to 10 agents.
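
For context, a factored infinite-horizon DEC-POMDP is commonly written as the tuple below. This is a standard textbook formulation, not quoted from the paper, and the paper's own notation may differ:

    % A standard DEC-POMDP definition (assumed notation, not taken from the paper).
    \[
      \langle I,\, S,\, \{A_i\}_{i \in I},\, P,\, R,\, \{\Omega_i\}_{i \in I},\, O,\, b_0,\, \gamma \rangle
    \]
    % I: set of agents; S: state space; A_i, \Omega_i: actions and observations of agent i;
    % P(s' | s, a): transition model for the joint action a; O(o | s', a): observation
    % model for the joint observation o; R(s, a): joint reward; b_0: initial state
    % distribution; \gamma: discount factor for the infinite horizon.
    % In the factored case the state is a vector s = (s^1, ..., s^K), and P and R
    % decompose over small subsets of the state variables, which is what keeps the
    % problem description compact even when the full state space is large.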

Suggested BibTeX entry:

@inproceedings{pajarinen11b,
    author = {Pajarinen, Joni and Peltonen, Jaakko},
    booktitle = {Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI)},
    language = {eng},
    month = {July},
    pages = {325--331},
    publisher = {AAAI Press},
    title = {{Efficient Planning for Factored Infinite-Horizon DEC-POMDPs}},
    year = {2011},
}
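
To cite the paper from a LaTeX document, save the entry above in a .bib file and reference it by its key. A minimal sketch (the file name references.bib is a placeholder):

    \documentclass{article}
    \begin{document}
    Factored infinite-horizon DEC-POMDP planning was addressed by
    Pajarinen and Peltonen~\cite{pajarinen11b}.
    \bibliographystyle{plain}
    \bibliography{references} % references.bib holds the entry above
    \end{document}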

See ijcai.org ...