Opponent Learning Awareness and Modelling in Multi-Objective Normal Form Games

Research output: Contribution to journal › Article

1 Citation (Scopus)


Many real-world multi-agent interactions consider multiple distinct criteria, i.e. the payoffs are multi-objective in nature. However, the same multi-objective payoff vector may lead to different utilities for each participant. Therefore, it is essential for an agent to learn about the behaviour of other agents in the system. In this work, we present the first study of the effects of such opponent modelling on multi-objective multi-agent interactions with nonlinear utilities. Specifically, we consider two-player multi-objective normal form games with nonlinear utility functions under the scalarised expected returns optimisation criterion. We contribute novel actor-critic and policy gradient formulations to allow reinforcement learning of mixed strategies in this setting, along with extensions that incorporate opponent policy reconstruction and learning with opponent learning awareness (i.e. learning while considering the impact of one's policy when anticipating the opponent's learning step). Empirical results in five different MONFGs demonstrate that opponent learning awareness and modelling can drastically alter the learning dynamics in this setting. When equilibria are present, opponent modelling can confer significant benefits on agents that implement it. When there are no Nash equilibria, opponent learning awareness and modelling allow agents to still converge to meaningful solutions that approximate equilibria.
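The scalarised expected returns (SER) criterion described above applies a player's nonlinear utility function to the *expected* payoff vector induced by the joint mixed strategy (rather than taking the expectation of scalarised payoffs). A minimal sketch of this computation, using a hypothetical 2x2 MONFG and an illustrative quadratic utility (both chosen here for demonstration, not taken from the paper):

```python
import numpy as np

# Hypothetical 2x2 MONFG: each joint action yields a 2-dimensional
# payoff vector. Shape: (row action, column action, objective).
payoffs = np.array([
    [[4.0, 0.0], [3.0, 1.0]],
    [[1.0, 3.0], [0.0, 4.0]],
])

def expected_payoff(pi_row, pi_col):
    """Expected payoff vector under mixed strategies pi_row and pi_col."""
    return np.einsum('i,j,ijk->k', pi_row, pi_col, payoffs)

def ser(pi_row, pi_col, utility):
    """Scalarised expected returns: utility of the expected payoff vector."""
    return utility(expected_payoff(pi_row, pi_col))

# An illustrative nonlinear utility: u(v) = v1^2 + v2^2.
u = lambda v: v[0] ** 2 + v[1] ** 2

pi_row = np.array([0.5, 0.5])  # uniform mixed strategy for the row player
pi_col = np.array([0.5, 0.5])  # uniform mixed strategy for the column player
print(ser(pi_row, pi_col, u))  # expected vector [2, 2] -> utility 8.0
```

Because the utility is applied outside the expectation, SER is sensitive to mixing in a way a linear scalarisation is not, which is precisely why mixed strategies and opponent behaviour matter in this setting.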

Original language: English
Number of pages: 23
Journal: Neural Computing & Applications
Publication status: Published - 19 Jun 2021
Event: Adaptive and Learning Agents Workshop 2020 - Auckland University of Technology, Auckland, New Zealand
Duration: 9 May 2020 - 10 May 2020
