Emergent Cooperation under Uncertain Incentive Alignment

Nicole Orzan, Erman Acar, Davide Grossi, Roxana Radulescu

Research output: Chapter in Book/Report/Conference proceeding › Conference paper

Abstract

Understanding the emergence of cooperation in systems of computational agents is crucial for the development of effective cooperative AI. Interactions among individuals in real-world settings are often sparse and occur within a broad spectrum of incentives, which are often only partially known. In this work, we explore how cooperation can arise among reinforcement learning agents in scenarios characterised by infrequent encounters, and where agents face uncertainty about the alignment of their incentives with those of others. To do so, we train the agents under a wide spectrum of environments, ranging from fully competitive, through mixed-motive, to fully cooperative. Under this type of uncertainty, we study the effects of mechanisms, such as reputation and intrinsic rewards, that have been proposed in the literature to foster cooperation in mixed-motive environments. Our findings show that uncertainty substantially lowers the agents' ability to engage in cooperative behaviour when that would be the best course of action. In this scenario, effective reputation mechanisms and intrinsic rewards boost the agents' capability to act near-optimally in cooperative environments, while also greatly enhancing cooperation in mixed-motive environments.
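
The abstract does not specify the environment. As a minimal illustrative sketch of how such a spectrum of incentives with uncertain alignment could be instantiated, one might use a public-goods-style game whose multiplication factor f sets the incentive structure (f < 1 competitive, 1 < f < n mixed-motive, f > n cooperative) and give each agent only a noisy reading of f. All names, classes, and parameter values below are assumptions for illustration, not the paper's implementation.

import numpy as np

class UncertainIncentiveGame:
    """Illustrative n-player public-goods-style game (hypothetical, not the paper's code)."""

    def __init__(self, n_agents=4, f_values=(0.5, 2.5, 6.0), obs_noise=1.0, seed=0):
        self.n = n_agents
        self.f_values = f_values    # spans competitive (0.5), mixed-motive (2.5), cooperative (6.0) for n=4
        self.obs_noise = obs_noise  # std of the noise on each agent's view of f
        self.rng = np.random.default_rng(seed)
        self.f = None

    def reset(self):
        # Sample the true incentive parameter for this encounter.
        self.f = self.rng.choice(self.f_values)
        # Each agent observes f through independent Gaussian noise,
        # modelling uncertainty about incentive alignment.
        return self.f + self.rng.normal(0.0, self.obs_noise, size=self.n)

    def step(self, contributions):
        # contributions: binary vector; 1 = contribute the full unit endowment.
        c = np.asarray(contributions, dtype=float)
        pot = self.f * c.sum() / self.n
        # Each agent keeps what it did not contribute, plus an equal share of the pot.
        return (1.0 - c) + pot

env = UncertainIncentiveGame()
noisy_f = env.reset()            # per-agent noisy view of the incentive parameter
rewards = env.step([1, 1, 0, 0])
print(noisy_f, rewards)

Under this construction, defection dominates individually in the mixed-motive regime even though full contribution is the social optimum, which is the tension that reputation and intrinsic-reward mechanisms would need to resolve.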
Original language: English
Title of host publication: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems
Publisher: International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Pages: 1521-1530
Number of pages: 10
ISBN (Electronic): 979-8-4007-0486-4
ISBN (Print): 979-8-4007-0486-4
DOIs
Publication status: Published - May 2024

Publication series

Name: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
ISSN (Print): 1548-8403