Emergent Cooperation and Deception in Public Good Games

Nicole Orzan, Erman Acar, Davide Grossi, Roxana Radulescu

Research output: Contribution to conference › Unpublished paper

Abstract

Communication is a widely used mechanism to promote cooperation in multi-agent systems. In the field of emergent communication, agents are usually trained on a particular type of environment: cooperative, competitive, or mixed-motive. Motivated by the idea that real-world settings are characterised by incomplete information and that humans face daily interactions under a wide spectrum of incentives, we hypothesise that emergent communication could be exploited simultaneously across all of these scenarios.
In this work, we pursue this line of research by focusing on social dilemmas and develop an extended version of the Public Goods Game that allows us to train independent reinforcement learning agents simultaneously on scenarios where incentives are aligned (or misaligned) to varying extents.
Additionally, we introduce uncertainty regarding the alignment of incentives and equip agents with the ability to learn a communication policy, in order to study the potential of emergent communication for overcoming this uncertainty.
We show that, in settings where all agents face the same level of uncertainty, communication can improve the cooperation level of the system, whereas, when uncertainty is asymmetric, certain agents learn to use communication to deceive and exploit their uncertain peers.
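As a rough illustration of the mechanics behind the abstract (not the authors' exact environment), the sketch below computes standard Public Goods Game payoffs under a multiplication factor f that each agent only observes through a noisy private signal, mimicking uncertainty about the alignment of incentives. The specific factor values, signal model, and toy contribution rule are illustrative assumptions.

```python
import numpy as np

def public_goods_payoffs(contributions, endowment, multiplier):
    """Standard Public Goods Game payoff: each agent keeps what it does not
    contribute and receives an equal share of the multiplied common pool.

    Roughly, when multiplier > n contributing is individually dominant
    (aligned incentives), while 1 < multiplier < n yields the classic
    mixed-motive dilemma; the paper's extension varies this alignment.
    """
    contributions = np.asarray(contributions, dtype=float)
    n = contributions.size
    pool_share = multiplier * contributions.sum() / n
    return endowment - contributions + pool_share

# Illustrative round with an uncertain multiplier: each agent observes only a
# noisy private signal of the true factor (placeholder uncertainty model).
rng = np.random.default_rng(0)
n_agents, endowment = 3, 10.0
true_f = rng.choice([0.5, 1.5, 3.5])                 # misaligned / mixed / aligned
signals = true_f + rng.normal(0.0, 0.5, n_agents)    # private noisy observations
contributions = np.clip(signals / 3.5, 0.0, 1.0) * endowment  # toy heuristic rule
print(public_goods_payoffs(contributions, endowment, true_f))
```

In the paper's setting, such contribution decisions (and optionally messages) are instead produced by independently trained reinforcement learning policies rather than the fixed heuristic used above.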
Original language: English
Number of pages: 9
Publication status: Published - May 2023
Event: 2023 Adaptive and Learning Agents Workshop at AAMAS - London, United Kingdom
Duration: 29 May 2023 – 30 May 2023
https://alaworkshop2023.github.io

Workshop

Workshop: 2023 Adaptive and Learning Agents Workshop at AAMAS
Abbreviated title: ALA 2023
Country: United Kingdom
City: London
Period: 29/05/23 – 30/05/23
Internet address: https://alaworkshop2023.github.io