Abstract

In multi-objective optimization, learning all the policies that reach Pareto-efficient solutions is an expensive process. The set of optimal policies can grow exponentially with the number of objectives, and recovering all solutions requires an exhaustive exploration of the entire state space. We propose Pareto Conditioned Networks (PCN), a method that uses a single neural network to encompass all non-dominated policies. PCN associates every past transition with its episode's return and trains the network such that, when conditioned on this same return, it reenacts said transition. In doing so, we transform the optimization problem into a classification problem. We recover a concrete policy by conditioning the network on the desired Pareto-efficient solution. Our method is stable as it learns in a supervised fashion, thus avoiding moving-target issues. Moreover, by using a single network, PCN scales efficiently with the number of objectives. Finally, it makes minimal assumptions about the shape of the Pareto front, which makes it suitable for a wider range of problems than previous state-of-the-art multi-objective reinforcement learning algorithms.
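The relabeling step described in the abstract — pairing each transition with the return its episode actually achieved, so the network can be trained by classification to reproduce the action when conditioned on that return — can be sketched as follows. This is a minimal illustration under assumed data layouts (the `relabel_episode` helper and the tuple format are hypothetical, not the authors' implementation):

```python
import numpy as np

def relabel_episode(transitions, gamma=1.0):
    """Associate every transition with the episode's multi-objective return.

    transitions: list of (state, action, reward_vector) tuples for one episode.
    Returns (state, return, action) training examples; a network conditioned
    on the return can then be trained by cross-entropy to output the action.
    (Hypothetical helper for illustration only.)
    """
    examples = []
    ret = np.zeros_like(np.asarray(transitions[0][2], dtype=float))
    # Walk the episode backwards, accumulating the vector-valued return.
    for state, action, reward in reversed(transitions):
        ret = np.asarray(reward, dtype=float) + gamma * ret
        examples.append((state, ret.copy(), action))
    examples.reverse()
    return examples

# Toy episode with two-objective rewards.
episode = [
    ("s0", 1, [0.0, 1.0]),
    ("s1", 0, [2.0, 0.0]),
    ("s2", 1, [1.0, 1.0]),
]
data = relabel_episode(episode)
# data[0] pairs state "s0" with the full episode return [3.0, 2.0]
```

At execution time, a concrete policy is recovered by conditioning the trained network on a desired return from the learned Pareto front and taking the predicted action at each step.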
Original language: English
Title of host publication: The 21st International Conference on Autonomous Agents and Multiagent Systems
Publisher: IFAAMAS
Publication status: Accepted/In press - 9 May 2022
Event: 21st International Conference on Autonomous Agents and Multiagent Systems
Duration: 9 May 2022 – 13 May 2022
Conference number: 21
https://aamas2022-conference.auckland.ac.nz

Conference

Conference: 21st International Conference on Autonomous Agents and Multiagent Systems
Abbreviated title: AAMAS
Period: 9 May 2022 – 13 May 2022
Internet address: https://aamas2022-conference.auckland.ac.nz
