Abstract
While autonomous artificial agents are assumed to execute their programmed strategies perfectly, the humans who design them may make mistakes. These mistakes can lead to a misalignment between the humans' intended goals and their agents' observed behavior, a problem of value alignment. Such an alignment problem may have particularly strong consequences when these autonomous systems are used in social contexts that involve some form of collective risk. By means of an evolutionary game-theoretic model, we investigate whether errors in the configuration of artificial agents change the outcome of a collective-risk dilemma, in comparison to a scenario with no delegation. Delegation is here distinguished from no-delegation simply by the moment at which a mistake occurs: either when programming/choosing the agent (in the case of delegation) or when executing the actions at each round of the game (in the case of no-delegation). We find that, while errors decrease the success rate, it is better to delegate and commit to a somewhat flawed strategy, perfectly executed by an autonomous agent, than to commit execution errors directly. Our model also shows that, in the long term, delegation strategies should be favored over no-delegation, if given the choice.
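The abstract's core distinction (a single configuration error under delegation versus a fresh execution error at every round without it) can be sketched in a minimal simulation. All names, thresholds, and parameter values below are illustrative assumptions, not taken from the paper's model:

```python
import random

def simulate(n_players=4, n_rounds=10, threshold=2, eps=0.1,
             delegate=True, seed=0):
    """Fraction of rounds in which enough players contribute.

    Illustrative collective-risk setup (not the paper's exact model):
    every player intends to contribute; with probability `eps` a
    mistake flips that intention to non-contribution.
    """
    rng = random.Random(seed)
    intended = [1] * n_players  # intended strategy: always contribute

    if delegate:
        # Delegation: the mistake happens once, when configuring the
        # agent; the (possibly wrong) strategy is then executed
        # perfectly in every round.
        programmed = [a ^ (rng.random() < eps) for a in intended]

    successes = 0
    for _ in range(n_rounds):
        if delegate:
            actions = programmed
        else:
            # No delegation: a mistake may occur independently at
            # every execution step.
            actions = [a ^ (rng.random() < eps) for a in intended]
        successes += sum(actions) >= threshold
    return successes / n_rounds
```

Comparing `simulate(delegate=True)` against `simulate(delegate=False)` over many seeds illustrates the trade-off the abstract describes: delegation either succeeds or fails consistently, while direct execution risks a mistake in every round.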
| Original language | English |
|---|---|
| Article number | 10460 |
| Number of pages | 13 |
| Journal | Scientific reports |
| Volume | 14 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 7 May 2024 |
Bibliographical note
© 2024. The Author(s).

Keywords
- Humans
- Game Theory
- Models, Theoretical
- Risk
Fingerprint
Dive into the research topics of 'Committing to the wrong artificial delegate in a collective-risk dilemma is better than directly committing mistakes'. Together they form a unique fingerprint.

Projects

- VLAAI1: Flanders Artificial Intelligence Research program (FAIR) – second cycle
  Nowe, A. (Administrative Promotor) & Vanderborght, B. (Co-Promotor)
  1/01/24 → 31/12/28
  Project: Applied
- FWOAL933: DELICIOS: An integrated approach to study the delegation of conflict-of-interest decisions to autonomous agents.
  Lenaerts, T. (Administrative Promotor), Simoens, P. (Co-Promotor) & Pierson, J. (Co-Promotor)
  1/01/19 → 31/12/22
  Project: Fundamental