Distributional Monte Carlo Tree Search for Risk-Aware and Multi-Objective Reinforcement Learning
In many risk-aware and multi-objective reinforcement learning settings, the utility of the user is derived from a single execution of a policy. In these settings, making decisions based on average future returns is not appropriate. For example, in a medical setting a patient may have only one opportunity to treat their illness. When making a decision, the expected return alone -- known in reinforcement learning as the value -- cannot account for the range of adverse or positive outcomes a decision may have. Our key insight is that the distribution over expected future returns should be used differently, to represent the critical information the agent requires at decision time. In this paper, we propose Distributional Monte Carlo Tree Search, an algorithm that learns a posterior distribution over the utility of the different possible returns attainable from individual policy executions, producing good policies for both risk-aware and multi-objective settings. Moreover, our algorithm outperforms the state of the art in multi-objective reinforcement learning for the expected utility of the returns.
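The core idea -- deciding by the utility of the return distribution rather than by the mean return -- can be illustrated with a minimal sketch. This is not the paper's algorithm: the node class, the UCB-style selection rule, and the `risk_averse_utility` function below are all simplified, hypothetical stand-ins chosen to show why a nonlinear utility over returns can rank actions differently than expected return does.

```python
import math

def risk_averse_utility(ret, threshold=0.0):
    # Hypothetical nonlinear utility: returns below the threshold are
    # penalised twice as heavily, so one bad outcome outweighs the mean.
    return ret if ret >= threshold else 2.0 * ret

class DistNode:
    """Tree node that keeps the empirical distribution of returns
    backed up through it, rather than only their running mean."""
    def __init__(self):
        self.returns = []      # every return observed through this node
        self.children = {}     # action -> DistNode

    @property
    def visits(self):
        return len(self.returns)

    def expected_utility(self, utility):
        # Mean *utility* of the return distribution; for a nonlinear
        # utility this differs from the utility of the mean return.
        return sum(utility(r) for r in self.returns) / len(self.returns)

def select_action(root, utility, c=1.0):
    # UCB-style action selection, ranking children by expected utility
    # of their return distributions instead of by mean return.
    total = sum(child.visits for child in root.children.values())
    def score(action):
        child = root.children[action]
        bonus = c * math.sqrt(math.log(total) / child.visits)
        return child.expected_utility(utility) + bonus
    return max(root.children, key=score)

# Toy tree: both actions have the same mean return (1.0), but the
# risky action's distribution includes severe negative outcomes.
root = DistNode()
root.children["safe"] = DistNode()
root.children["safe"].returns = [1.0, 1.0, 1.0, 1.0]
root.children["risky"] = DistNode()
root.children["risky"].returns = [5.0, -3.0, 5.0, -3.0]
```

Under expected return the two actions tie, but the risk-averse utility maps the risky action's distribution to an expected utility of -0.5 versus 1.0 for the safe action, so the safe action is selected -- exactly the kind of single-execution information a scalar value estimate discards.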
Title of host publication: The 20th International Conference on Autonomous Agents and Multiagent Systems
Number of pages: 3
Publication status: Published - 3 May 2021
Event: The 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021), Virtual, 3 May 2021 – 7 May 2021
Series: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS