In multi-objective optimization, learning all the policies that reach Pareto-efficient solutions is an expensive process. The set of optimal policies can grow exponentially with the number of objectives, and recovering all solutions requires an exhaustive exploration of the entire state space. We propose Pareto Conditioned Networks (PCN), a method that uses a single neural network to encompass all non-dominated policies. PCN associates every past transition with its episode's return and trains the network such that, when conditioned on that same return, it reenacts said transition. In doing so, we transform the optimization problem into a classification problem. We recover a concrete policy by conditioning the network on the desired Pareto-efficient solution. Our method is stable as it learns in a supervised fashion, thus avoiding moving-target issues. Moreover, by using a single network, PCN scales efficiently with the number of objectives. Finally, it makes minimal assumptions on the shape of the Pareto front, which makes it suitable for a wider range of problems than previous state-of-the-art multi-objective reinforcement learning algorithms.
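The core idea — pair each logged transition with its episode's return, then train the policy, conditioned on that return, to reproduce the logged action as a classification target — can be sketched in a toy form. This is not the paper's implementation: a linear softmax model stands in for the neural network, and the replay buffer, returns, and dimensions below are all illustrative assumptions.

```python
import numpy as np

# Toy sketch of PCN's supervised update: a return-conditioned
# classifier trained with cross-entropy on logged transitions.
n_states, n_objectives, n_actions = 4, 2, 2
dim = n_states + n_objectives            # input: one-hot state + desired return
W = np.zeros((dim, n_actions))           # linear stand-in for the network

def features(state, desired_return):
    x = np.zeros(dim)
    x[state] = 1.0                       # one-hot state encoding
    x[n_states:] = desired_return        # condition on the episode's return
    return x

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical replay buffer: in every state, action 0 led to return (1, 0)
# and action 1 to return (0, 1) -- two points on a toy Pareto front.
returns = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
buffer = [(s, a, returns[a]) for s in range(n_states) for a in (0, 1)]

lr = 0.5
for _ in range(50):                      # supervised cross-entropy updates
    for s, a, ret in buffer:
        x = features(s, ret)
        p = softmax(W.T @ x)
        grad = np.outer(x, p)            # d(cross-entropy)/dW ...
        grad[:, a] -= x                  # ... with one-hot target a
        W -= lr * grad

def act(state, desired_return):
    """Recover a concrete policy by conditioning on a desired return."""
    return int(np.argmax(W.T @ features(state, desired_return)))
```

Because the targets are fixed logged actions rather than bootstrapped value estimates, each update is an ordinary supervised step, which is what gives the method its stability; conditioning `act` on a stored non-dominated return then selects the matching policy.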
|Title||The 21st International Conference on Autonomous Agents and Multiagent Systems|
|Status||Accepted/In press - 9 May 2022|
|Event||21st International Conference on Autonomous Agents and Multiagent Systems|
Duration: 9 May 2022 → 13 May 2022