Abstract

In multi-objective optimization, learning all the policies that reach Pareto-efficient solutions is an expensive process. The set of optimal policies can grow exponentially with the number of objectives, and recovering all solutions requires an exhaustive exploration of the entire state space. We propose Pareto Conditioned Networks (PCN), a method that uses a single neural network to encompass all non-dominated policies. PCN associates every past transition with its episode's return and trains the network such that, when conditioned on this same return, it reenacts said transition. In doing so, we transform the optimization problem into a classification problem. We recover a concrete policy by conditioning the network on the desired Pareto-efficient solution. Our method is stable as it learns in a supervised fashion, thus avoiding moving-target issues. Moreover, by using a single network, PCN scales efficiently with the number of objectives. Finally, it makes minimal assumptions on the shape of the Pareto front, which makes it suitable for a wider range of problems than previous state-of-the-art multi-objective reinforcement learning algorithms.
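The core idea of the abstract — relabel each logged transition with its episode's return, then train a classifier conditioned on that return to reproduce the taken action — can be sketched in a few lines. The toy environment, dimensions, and linear softmax model below are illustrative assumptions, not the paper's implementation (the actual PCN uses a neural network and additionally conditions on the remaining horizon):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logged dataset: each past (state, action) pair is relabeled
# with the return its episode eventually achieved (2 objectives).
# Toy assumption: action 0 yields return (1, 0), action 1 yields (0, 1).
n, state_dim, n_actions, n_obj = 512, 3, 2, 2
states = rng.normal(size=(n, state_dim))
actions = rng.integers(0, n_actions, size=n)
returns = np.eye(n_obj)[actions]  # episode return attached to each transition

# Conditioned policy sketch: a linear softmax classifier whose input is the
# state concatenated with the desired return.
X = np.hstack([states, returns])
W = np.zeros((X.shape[1], n_actions))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Supervised training: cross-entropy on the action actually taken, so there
# is no bootstrapped (moving) target as in value-based RL.
for _ in range(200):
    grad = softmax(X @ W)
    grad[np.arange(n), actions] -= 1.0  # d(cross-entropy)/d(logits)
    W -= 0.5 * (X.T @ grad) / n

def act(state, desired_return):
    """Recover a concrete policy by conditioning on a desired return."""
    x = np.concatenate([state, desired_return])[None, :]
    return int(softmax(x @ W).argmax())
```

Conditioning `act` on different desired returns then recovers different trade-offs from the same network, which is how a single model can cover the whole set of non-dominated policies.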
Original language: English
Title: The 21st International Conference on Autonomous Agents and Multiagent Systems
Publisher: IFAAMAS
Status: Accepted/In press - 9 May 2022
Event: 21st International Conference on Autonomous Agents and Multi-agent System
Duration: 9 May 2022 – 13 May 2022
Conference number: 21
https://aamas2022-conference.auckland.ac.nz

Conference

Conference: 21st International Conference on Autonomous Agents and Multi-agent System
Abbreviated title: AAMAS
Period: 9/05/22 – 13/05/22
