Deep reinforcement learning for large-scale epidemic control

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › Research


Epidemics of infectious diseases pose a serious threat to public health and global economies.
Yet developing prevention strategies remains a challenging process.
We therefore investigate a deep reinforcement learning approach to automatically learn prevention strategies in an epidemiological model, in the context of pandemic influenza.
To this end, we construct a new epidemiological meta-population model, with 379 patches, that balances model complexity against computational efficiency, making the use of reinforcement learning techniques attainable.
First, we establish a ground truth against which we evaluate the performance of the Proximal Policy Optimization (PPO) algorithm when learning in a single district of this epidemiological model.
Next, we consider a larger-scale problem: learning a joint policy to control a community of 11 tightly coupled districts, for which no ground truth can be established.
This experiment shows that deep reinforcement learning can be used to learn mitigation policies in complex epidemiological models with a large state space.
Moreover, through this experiment, we demonstrate that collaboration between districts can be advantageous when designing prevention strategies.
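To make the setting concrete, the following is a minimal sketch of one discrete-time step of a meta-population SIR model in which a per-district action scales the local contact rate (e.g. modelling school closures). All names, parameters, and the mixing matrix are illustrative assumptions; this does not reproduce the authors' 379-patch model or their influenza-specific dynamics.

```python
import numpy as np

def sir_step(S, I, R, mixing, beta, gamma, action, dt=1.0):
    """Advance every patch (district) one time step.

    S, I, R : arrays of shape (n_patches,) with compartment counts
    mixing  : (n_patches, n_patches) row-normalised coupling matrix
    beta    : baseline transmission rate
    gamma   : recovery rate
    action  : per-patch multiplier in [0, 1] reducing transmission
    """
    N = S + I + R
    # Force of infection per patch, including contacts imported
    # from coupled patches via the mixing matrix.
    lam = beta * action * (mixing @ (I / N))
    new_inf = lam * S * dt
    new_rec = gamma * I * dt
    return S - new_inf, I + new_inf - new_rec, R + new_rec

n = 11  # e.g. a community of tightly coupled districts
S = np.full(n, 990.0)
I = np.full(n, 10.0)
R = np.zeros(n)
mixing = np.full((n, n), 0.01) + np.eye(n) * 0.9
mixing /= mixing.sum(axis=1, keepdims=True)  # row-normalise

closure = np.full(n, 0.5)  # halve effective contacts in every district
for _ in range(100):
    S, I, R = sir_step(S, I, R, mixing, beta=0.3, gamma=0.1, action=closure)
print(f"attack rate with intervention: {R.sum() / (S + I + R).sum():.2f}")
```

In a reinforcement learning formulation, `action` would be chosen per step by the learned policy (e.g. PPO), with a reward that penalises both infections and the social cost of the intervention; the joint-policy experiment corresponds to selecting all 11 district actions together.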
Original language: English
Title of host publication: Proceedings of the Adaptive and Learning Agents Workshop 2020 (ALA2020) at AAMAS
Number of pages: 9
Publication status: Accepted/In press - 2020
Event: 2020 Adaptive Learning Agents workshop at AAMAS - Auckland, New Zealand
Duration: 9 May 2020 - 10 May 2020


Workshop: 2020 Adaptive Learning Agents workshop at AAMAS
Abbreviated title: ALA 2020
Country: New Zealand

