Voluntary safety commitments provide an escape from over-regulation in AI development

The Anh Han, Tom Lenaerts, Francisco C. Santos, Luís Moniz Pereira

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)
1 Downloads (Pure)

Abstract

With the introduction of Artificial Intelligence (AI) and related technologies into our daily lives, fear and anxiety about their misuse, as well as about the biases incorporated during their creation, have led to a demand for governance and associated regulation. Yet regulating an innovation process that is not well understood may stifle that process and reduce the benefits that society could gain from the resulting technology, even under the best intentions. Instruments that shed light on such processes are thus needed, as they can help ensure that imposed policies achieve the ambitions for which they were designed. Starting from a game-theoretical model that captures the fundamental dynamics of a race for domain supremacy using AI technology, we show how socially unwanted outcomes may be produced when sanctioning is applied unconditionally to risk-taking, i.e. potentially unsafe, behaviours. We demonstrate the potential of a regulatory approach that combines a voluntary commitment mechanism reminiscent of soft law, wherein technologists are free to choose between independently pursuing their own course of action or entering binding agreements to act safely, with either peer or governmental sanctioning of those who do not abide by their pledges. Because the commitments are binding and sanctioned, they go beyond the classic view of soft law and are more closely akin to law-enforced regulation. Overall, this work reveals how voluntary but sanctionable commitments generate socially beneficial outcomes in all envisageable scenarios of a short-term race towards domain supremacy through AI technology. These results provide an original dynamic-systems perspective on the governance potential of enforceable soft-law techniques or co-regulatory mechanisms, showing how they may shape the ambitions of developers of AI-based applications.
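The paper's model itself is not reproduced on this page. As a rough illustration of the kind of evolutionary dynamics the abstract describes, the toy sketch below runs replicator dynamics on an invented two-strategy race game (SAFE vs. UNSAFE developers) and varies a single sanction parameter; all payoff values and names are hypothetical and chosen only to show how a sufficiently strong sanction on risk-taking can flip the population towards safe behaviour, not to reproduce the authors' results.

```python
import numpy as np

def run(sanction, steps=2000, eta=0.1):
    """Exponential replicator dynamics for a stylized two-strategy AI race.

    Strategies: 0 = SAFE (complies, pays a safety cost), 1 = UNSAFE
    (skips safety, wins head-to-head, but pays the sanction).
    All payoff numbers are illustrative, not taken from the paper.
    """
    b, c = 4.0, 1.0  # benefit of winning the race, cost of acting safely
    # A[i][j] = payoff to row strategy i against column strategy j.
    A = np.array([
        [b / 2 - c, -c],                   # SAFE shares the prize, or loses it
        [b - sanction, b / 2 - sanction],  # UNSAFE wins outright but is fined
    ])
    x = np.array([0.5, 0.5])               # initial population shares
    for _ in range(steps):
        f = A @ x                          # expected payoff of each strategy
        x = x * np.exp(eta * f)            # exponential replicator update
        x /= x.sum()                       # renormalise to a distribution
    return x

weak, strong = run(sanction=0.0), run(sanction=3.5)
print(f"no sanction:     SAFE share = {weak[0]:.3f}")
print(f"strong sanction: SAFE share = {strong[0]:.3f}")
```

With these toy numbers, no sanction lets UNSAFE dominate the population, while a sanction exceeding half the prize plus the safety cost makes SAFE the dominant strategy; the exponential update is used instead of the textbook multiplicative one so that negative payoffs remain well-behaved.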

Original language: English
Article number: 101843
Journal: Technology in Society
Volume: 68
DOIs
Publication status: Published - Feb 2022

Bibliographical note

Funding Information:
The authors acknowledge the feedback from Mireille Hildebrandt and Gregory Lewkowicz on issues related to soft law and the necessity of "harder" law in AI governance. T.A.H., L.M.P. and T.L. are supported by Future of Life Institute grant RFP2-154. T.A.H. is also supported by a Leverhulme Research Fellowship (RF-2020-603/9). L.M.P. is supported by NOVA-LINCS (UIDB/04516/2020) with the financial support of FCT - Fundação para a Ciência e a Tecnologia, Portugal, through national funds. F.C.S. acknowledges support from FCT Portugal (grants UIDB/50021/2020, PTDC/MAT-APL/6804/2020 and PTDC/CCI-INF/7366/2020). T.L. benefits from the support of the Flemish Government through the AI Research Program, and F.C.S. and T.L. from TAILOR, a project funded by the EU Horizon 2020 research and innovation programme under GA No. 952215. T.L. is furthermore supported by the F.N.R.S. project with grant number 31257234, the F.W.O. project with grant nr. G.0391.13N, the FuturICT 2.0 (www.futurict2.eu) project funded by the FLAG-ERA JCT 2016, and the Service Public de Wallonie Recherche under grant n° 2010235 - ARIAC by DigitalWallonia4.ai.

Publisher Copyright:
© 2021

Copyright:
Copyright 2022 Elsevier B.V., All rights reserved.

Keywords

  • AI development race
  • Commitments
  • Evolutionary game theory
  • Incentives
  • Safety
