Abstract

We present Cooperative Prioritized Sweeping, a novel model-based algorithm for sample-efficient learning in large multi-agent Markov decision processes. Our approach leverages domain knowledge about the structure of the problem in the form of a dynamic decision network. Using this information, our method learns a model of the environment and uses it to determine which state-action pairs are most likely to require updating, significantly increasing learning speed. Batch updates can then be performed that efficiently back-propagate knowledge throughout the value function. Our method outperforms the state-of-the-art sparse cooperative Q-learning and QMIX algorithms on the well-known SysAdmin benchmark, on randomized environments, and on a fully observable variant of the firefighting benchmark from the Dec-POMDP literature.
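The update scheme the abstract describes builds on classic (single-agent) prioritized sweeping: value changes are pushed onto a priority queue, and planning repeatedly backs up the highest-priority state-action pair, re-prioritizing its predecessors so that knowledge propagates backwards through the value function. The sketch below illustrates that generic mechanism on a toy deterministic chain MDP; it is not the paper's multi-agent Cooperative Prioritized Sweeping, and the chain environment, hyperparameters, and function names are assumptions made for illustration only.

```python
import heapq
from collections import defaultdict

# Generic tabular prioritized sweeping on a toy deterministic chain MDP.
# NOT the paper's multi-agent algorithm -- a minimal single-agent sketch.

N_STATES = 5          # states 0..4; state 4 is the (absorbing) goal
ACTIONS = (-1, +1)    # move left or right along the chain
GAMMA = 0.9           # discount factor (assumed for the sketch)
THETA = 1e-5          # priority threshold below which updates are skipped

def step(s, a):
    """Deterministic chain dynamics: move and clip; reward 1 on reaching the goal."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def prioritized_sweeping(n_sweeps=200):
    Q = defaultdict(float)
    model = {}                       # (s, a) -> (s', r), learned from experience
    predecessors = defaultdict(set)  # s' -> set of (s, a) pairs leading into s'
    pq = []                          # max-heap of pending updates (negated priority)

    # Seed the model by visiting every non-terminal state-action pair once.
    for s in range(N_STATES - 1):
        for a in ACTIONS:
            s2, r = step(s, a)
            model[(s, a)] = (s2, r)
            predecessors[s2].add((s, a))
            p = abs(r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            if p > THETA:
                heapq.heappush(pq, (-p, (s, a)))

    # Planning: back up the highest-priority pair, then re-prioritize its
    # predecessors -- this is how value changes back-propagate efficiently.
    for _ in range(n_sweeps):
        if not pq:
            break
        _, (s, a) = heapq.heappop(pq)
        s2, r = model[(s, a)]
        Q[(s, a)] = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        for (ps, pa) in predecessors[s]:
            ps2, pr = model[(ps, pa)]
            p = abs(pr + GAMMA * max(Q[(ps2, b)] for b in ACTIONS) - Q[(ps, pa)])
            if p > THETA:
                heapq.heappush(pq, (-p, (ps, pa)))
    return Q

Q = prioritized_sweeping()
```

The paper's contribution is, roughly, to make this kind of prioritization tractable when the state-action space is the product of many agents' variables, by exploiting the factored structure given by the dynamic decision network.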

Original language: English
Title of host publication: Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021
Publisher: IFAAMAS
Pages: 160-168
Number of pages: 9
ISBN (Electronic): 978-1-4503-8307-3
Publication status: Published - 2021
Event: The 20th International Conference on Autonomous Agents and Multiagent Systems - Virtual
Duration: 3 May 2021 - 7 May 2021
https://aamas2021.soton.ac.uk/

Conference

Conference: The 20th International Conference on Autonomous Agents and Multiagent Systems
Abbreviated title: AAMAS 2021
Period: 3/05/21 - 7/05/21
Internet address: https://aamas2021.soton.ac.uk/
