Abstract
This paper examines European Regulation (EU) 2024/1689 [1] laying down harmonised rules on artificial intelligence, commonly known as the AI Act, through a feminist lens, analysing how the regulatory framework addresses gender, non-discrimination, and systemic power imbalances. Drawing on Miranda Fricker’s theory of hermeneutical injustice [2], Catharine MacKinnon’s feminist legal theory on male dominance [3], Aníbal Quijano’s concept of the "coloniality of power" [4] and Walter Mignolo’s theory of epistemology and the decoloniality of law [5], the paper critiques the AI Act’s approach to gender bias and discrimination. It argues that while the AI Act seeks to mitigate gendered risks, it falls short of addressing the structural biases embedded in AI technologies, which disproportionately harm marginalised groups.
Following an introduction that highlights the significance of this research, the paper provides background on the formulation of the AI Act’s final text and outlines the methodological approach used to select key provisions for analysis. The main section critically examines specific articles of the AI Act with gendered implications, demonstrating how existing provisions either reinforce or fail to challenge algorithmic discrimination. The conclusion underscores the necessity of stronger mechanisms to address gender-based inequities in AI development and deployment from an intersectional perspective. The paper closes by proposing feminist-informed revisions to the AI Act that emphasise gender inclusivity, intersectionality, and accountability in AI governance, advocating for a more equitable AI framework that reflects the lived experiences of women, LGBTQIA+ people and marginalised communities.
Original language | English |
---|---|
Article number | E25 |
Pages (from-to) | 1-18 |
Number of pages | 18 |
Journal | Cambridge Forum on AI: Law and Governance |
Issue number | 1 |
DOIs | |
Publication status | Published - 2025 |
Keywords
- AI Act
- structural inequalities
- intersectionality
- gendered risks
- feminism