Scalarized Multi-Objective Reinforcement Learning: Novel Design Techniques

Kristof Van Moffaert, Madalina Drugan, Ann Nowe

Research output: Chapter in Book/Report/Conference proceeding › Meeting abstract (Book)

Abstract

In multi-objective problems, it is key to find compromise solutions that balance the different objectives. The linear scalarization function is often used to translate the multi-objective nature of a problem into a standard, single-objective problem. However, it is well known that such a linear combination can only find solutions in convex regions of the Pareto front, which makes the method unsuitable when the shape of the front is not known beforehand. We propose the use of a non-linear scalarization function, the Chebyshev scalarization function, in multi-objective reinforcement learning. We show that the Chebyshev scalarization method overcomes the flaws of the linear scalarization function and is able to discover all Pareto-optimal solutions in non-convex environments.
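
As a minimal illustration of the two scalarization schemes compared in the abstract, the Python sketch below scalarizes vector-valued Q-values both with a weighted sum and with the weighted Chebyshev distance to a utopian reference point z*. All names, the weights, the Q-vectors, and z* are illustrative assumptions, not values from the paper.

# Minimal sketch of linear vs. Chebyshev scalarization of vector-valued
# Q-values. The weights, Q-vectors, and utopian point z* below are
# illustrative assumptions, not values taken from the paper.
import numpy as np

def linear_scalarize(q_vec, weights):
    # Weighted sum of the objective-wise Q-values; higher is better.
    # Can only recover solutions on convex parts of the Pareto front.
    return np.dot(weights, q_vec)

def chebyshev_scalarize(q_vec, weights, z_star):
    # Weighted Chebyshev (L-infinity) distance to the utopian point z*;
    # lower is better. Can also reach non-convex regions of the front.
    return np.max(weights * np.abs(q_vec - z_star))

weights = np.array([0.5, 0.5])
z_star = np.array([10.0, 10.0])      # assumed utopian reference point
q_table = {                          # assumed Q(s, a) vectors, one per action
    "a0": np.array([9.0, 1.0]),      # extreme: strong on objective 1
    "a1": np.array([4.0, 4.0]),      # balanced compromise
    "a2": np.array([1.0, 9.0]),      # extreme: strong on objective 2
}

greedy_linear = max(q_table, key=lambda a: linear_scalarize(q_table[a], weights))
greedy_cheby = min(q_table, key=lambda a: chebyshev_scalarize(q_table[a], weights, z_star))
print(greedy_linear, greedy_cheby)   # a0 a1

With these assumed values, the weighted sum scores the extreme actions a0 and a2 highest (5.0 each) and the compromise a1 lowest (4.0), so linear scalarization can never select a1 for any weight vector: [4, 4] lies in a concave region of the front spanned by [9, 1] and [1, 9]. The Chebyshev scalarization, by contrast, selects a1 (distance 3.0 vs. 4.5), which is the behavior the abstract attributes to the non-linear method.
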
Original language: English
Title of host publication: Proceedings of the 25th Benelux Conference on Artificial Intelligence
Editors: Koen Hindriks, Mathijs de Weerdt, Birna van Riemsdijk, Martijn Warnier
Pages: 360-361
Number of pages: 2
Publication status: Published - Nov 2014
Event: 25th Benelux Conference on Artificial Intelligence - Delft, Netherlands
Duration: 7 Nov 2013 - 8 Nov 2013

Publication series

Name: Benelux Conference on Artificial Intelligence Proceedings
ISSN (Electronic): 1568-7805

Conference

Conference: 25th Benelux Conference on Artificial Intelligence
Country: Netherlands
City: Delft
Period: 7/11/13 - 8/11/13

Bibliographical note

Koen Hindriks, Mathijs de Weerdt, Birna van Riemsdijk, and Martijn Warnier

Keywords

  • reinforcement learning
  • multi-objective
  • scalarization
