Enhancing AI fairness through impact assessment in the European Union: a legal and computer science perspective

Alessandra Calvi, Dimitris Kotzinos

Research output: Conference paper

14 Citations (Scopus)

Abstract

How can people be protected from algorithmic harms? A promising solution,
although still in its infancy, is the algorithmic impact assessment
(AIA). AIAs are iterative processes used to investigate the possible
short- and long-term societal impacts of AI systems before their
use, with ongoing monitoring and periodic revisiting even after
their implementation. When conducted in a participatory and transparent
fashion, they could create bridges across the legal, social and
computer science domains, promoting both the accountability of the entity
performing them and public scrutiny. They could make it possible
to re-attach the societal and regulatory context to the mathematical
definition of fairness, thus expanding the formalistic approach
thereto. Whilst the regulatory framework in the European Union
currently lacks an obligation to perform such an AIA, some other
provisions are expected to play a role in AI development, leading
the way towards more widespread adoption of AIAs. These include
the Data Protection Impact Assessment (DPIA) under the General
Data Protection Regulation (GDPR), the risk assessment process under
the Digital Services Act (DSA) and the Conformity Assessment
(CA) foreseen under the AI Regulation proposal.
In this paper, after briefly introducing the plurality of definitions
of fairness in the legal, social and computer science domains, and explaining
to what extent the current and upcoming legal framework
mandates the adoption of fairness metrics, we illustrate how
AIAs could create bridges between all these disciplines, allowing us
to build fairer AI solutions. We then recognise the role of the DPIA,
the DSA risk assessment and the CA by discussing the contributions they
can offer towards AIAs, while also identifying the aspects they lack.
Finally, we identify how these assessment provisions could aid the
overall technical discussion of introducing and assessing fairness
in AI-based models and processes.
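As a concrete illustration of the "mathematical definition of fairness" the abstract refers to (this sketch is not taken from the paper itself), one widely used formalisation is demographic parity: the rate of positive decisions should be similar across protected groups. A minimal computation, assuming binary decisions and two hypothetical groups "A" and "B":

```python
# Illustrative sketch (not from the paper): demographic parity difference,
# one common formal fairness metric. It measures the gap between the
# positive-decision rates that different protected groups receive.

def demographic_parity_difference(decisions, groups):
    """Absolute gap between the highest and lowest positive-decision rate.

    decisions: list of 0/1 model outputs
    groups:    list of group labels, aligned with decisions
    """
    rates = {}
    for g in set(groups):
        members = [d for d, gr in zip(decisions, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical example: group A is approved 75% of the time, group B 25%.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

A value of 0 would indicate identical approval rates; the paper's point is precisely that such a number, taken alone, strips away the societal and regulatory context an AIA would re-attach.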
Original language: English
Title: FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency
Publisher: Association for Computing Machinery, New York, NY, United States
Pages: 1229–1245
Number of pages: 17
Electronic ISBN: 979-8-4007-0192-4
DOIs
Status: Published - 12 Jun 2023

Publication series

Name: ACM International Conference Proceeding Series

Bibliographical note

Funding Information:
This work by Alessandra Calvi has been funded under the EUTOPIA PhD co-tutelle programme 2021. Award number: EUTOPIA-PhD-2021-0000000127 OZRIFTM5.

Publisher Copyright:
© 2023 ACM.

