Multiagent planning under uncertainty with stochastic communication delays
| Authors | |
|---|---|
| Publication date | 2008 |
| Host editors | |
| Book title | Proceedings of the Eighteenth International Conference on Automated Planning and Scheduling |
| ISBN | |
| Event | ICAPS 2008 |
| Pages (from-to) | 338-345 |
| Publisher | Menlo Park, CA: AAAI Press |
| Organisations | |
| Abstract | We consider the problem of cooperative multiagent planning under uncertainty, formalized as a decentralized partially observable Markov decision process (Dec-POMDP). Unfortunately, optimal planning in these models is provably intractable. By communicating their local observations before they take actions, agents synchronize their knowledge of the environment, and the planning problem reduces to a centralized POMDP. As such, relying on communication significantly reduces the complexity of planning. In the real world, however, such communication might fail temporarily. We present a step towards more realistic communication models for Dec-POMDPs by proposing a model that: (1) allows communication to be delayed by one or more time steps, and (2) explicitly considers future probabilities of successful communication. For our model, we discuss how to efficiently compute an (approximate) value function and corresponding policies, and we demonstrate our theoretical results with encouraging experiments. |
| Document type | Conference contribution |
| Published at | http://www.science.uva.nl/research/isla/pub/Spaan08icaps.pdf |
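The abstract notes that once agents exchange their local observations, the Dec-POMDP reduces to a centralized POMDP over a shared belief. A minimal sketch of that shared-belief update, using a hypothetical two-state toy model (the transition and observation matrices below are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical toy joint model (sizes and numbers are illustrative only):
# T[a] is the state-transition matrix for joint action a, T[a][s, s'] = P(s' | s, a);
# O[a] is the joint-observation matrix, O[a][s', o] = P(o | s', a).
T = {0: np.array([[0.9, 0.1],
                  [0.2, 0.8]])}
O = {0: np.array([[0.8, 0.2],
                  [0.3, 0.7]])}

def belief_update(b, a, o):
    """Standard centralized POMDP belief update: once the agents have
    synchronized by exchanging local observations, the joint observation o
    lets each of them maintain this same shared belief over the joint state."""
    b_pred = b @ T[a]            # predict: sum_s P(s' | s, a) b(s)
    b_new = b_pred * O[a][:, o]  # correct: weight by P(o | s', a)
    return b_new / b_new.sum()   # normalize

# Start from a uniform belief and process one synchronized step.
# Under the paper's delayed-communication setting, an agent could only run
# this shared update up to the last step whose observations have arrived.
b0 = np.array([0.5, 0.5])
b1 = belief_update(b0, a=0, o=0)
```

This is only the fully-synchronized case; the paper's contribution concerns what happens when the observation exchange is delayed by one or more steps with some probability, which this sketch does not model.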