Abstract
The integration of AI systems in decision-making processes often promises increased
efficiency, accuracy, and objectivity through automation and standardisation. However, the
deployment of these systems has raised critical concerns regarding fairness, transparency,
and accountability. Biased algorithms have the potential to cause allocative and representa-
tional harms through discrimination and stereotyping, consequently exacerbating social
inequalities.
Despite the presence of anti-discrimination laws, the detection of algorithmic bias is
challenging due to the inherent opaqueness of AI systems, limited access to datasets
and models, the use of proxies to encode protected characteristics, and the prevalence
of intersectional discrimination. Examining the interplay between AI technologies and
decision-making processes is crucial in this context.
In this dissertation, we address immediate and long-term harms caused by AI systems
interacting with our world. We argue for a paradigm shift towards equality of treatment,
particularly through developing input-based group fairness evaluation methods. Further-
more, we provide a taxonomy of AI-driven feedback loops and their potential long-term
consequences. Finally, to ensure better utilisation of these tools, we compose policy
recommendations and introduce a design thinking approach for developers.
| Original language | English |
| --- | --- |
| Awarding institution | |
| Supervisor(s)/Advisor | |
| Date of award | 7 Sep 2023 |
| Status | Published - 2023 |