Abstract
How can people be protected from algorithmic harms? A promising solution,
although in its infancy, is algorithmic impact assessment
(AIA). AIAs are iterative processes used to investigate the possible
short- and long-term societal impacts of AI systems before their
use, with ongoing monitoring and periodic revisiting even after
their implementation. When conducted in a participatory and transparent
fashion, they could create bridges across the legal, social and
computer science domains, promoting the accountability of the entity
performing them as well as public scrutiny. They could make it possible
to re-attach the societal and regulatory context to the mathematical
definition of fairness, thus expanding the formalistic approach
thereto. Whilst the regulatory framework in the European Union
currently lacks the obligation to perform such AIA, some other
provisions are expected to play a role in AI development, leading
the way towards more widespread adoption of AIA. These include
the Data Protection Impact Assessment (DPIA) under the General
Data Protection Regulation (GDPR), the risk assessment process under
the Digital Services Act (DSA) and the Conformity Assessment
(CA) foreseen under the AI Regulation proposal.
In this paper, after briefly introducing the plurality of definitions
of fairness in the legal, social and computer science domains, and explaining
to what extent the current and upcoming legal framework
mandates the adoption of fairness metrics, we will illustrate how
AIA could create bridges between all these disciplines, allowing us
to build fairer AI solutions. We will then recognise the role of DPIA,
DSA risk assessment and CA by discussing the contributions they
can offer towards AIA but also identify the aspects lacking therein.
We will then identify how these assessment provisions could aid the
overall technical discussion of introducing and assessing fairness
in AI-based models and processes.
Original language | English |
---|---|
Title of host publication | FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency |
Publisher | Association for Computing Machinery, New York, NY, United States |
Pages | 1229–1245 |
Number of pages | 17 |
ISBN (Electronic) | 979-8-4007-0192-4 |
DOIs | |
Publication status | Published - 12 Jun 2023 |
Publication series
Name | ACM International Conference Proceeding Series |
---|
Bibliographical note
Funding Information:This work by Alessandra Calvi has been funded under the EUTOPIA PhD co-tutelle programme 2021. Award number: EUTOPIA-PhD-2021-0000000127 OZRIFTM5.
Publisher Copyright:
© 2023 ACM.
Paper title: Enhancing AI fairness through impact assessment in the European Union: a legal and computer science perspective
Activities
Protecting fundamental rights in the age of AI: what role for impact assessments?
Alessandra Calvi (Invited speaker)
13 Nov 2023 | Activity: Talk or presentation › Talk at a school event