Exploring Gender Dimensions in High-Risk AI Systems: Moving Towards a Gender Privacy Impact Assessment within the EU AI Act

Research output: Conference paper

Abstract

The rapid integration of Artificial Intelligence (AI) across diverse sectors presents significant opportunities for innovation, yet it also raises concerns about AI's potential to perpetuate discrimination, particularly against women and marginalised groups. Cases such as Amazon's recruitment tool, Deliveroo's rider-ranking algorithm, and AI applications in healthcare, in which AI systems introduced gender bias that disproportionately affected women and reinforced existing inequalities, illustrate the pressing need for a more nuanced regulatory approach to AI.
While the vast amounts of personal data processed by AI systems, including information about safety, gender identity, and health, could undoubtedly undermine women's privacy, the examples above also highlight the importance of disaggregated data in preventing bias.
At the same time, women's privacy in AI systems is not jeopardised exclusively by personal data processing, as illustrated by the growing creation and distribution of non-consensual sexualised deepfakes. Beyond violating personal privacy, such content perpetuates gendered harms, underscoring the need for effective privacy protection mechanisms to prevent exploitation and abuse in the digital landscape.
Academic literature across multiple domains has explored the challenges AI-driven decision-making poses to fundamental rights, especially non-discrimination, gender equality, and privacy. These concerns are also reflected in the European Union's recent AI Act (AIA), which introduces mechanisms such as the Risk Management System (RMS) (Art. 9 AIA) and the Fundamental Rights Impact Assessment (FRIA) (Art. 27 AIA) to safeguard fundamental rights threatened by high-risk AI systems. However, questions remain regarding the effectiveness of these mechanisms in preventing gender-based discrimination, protecting women's privacy, and promoting gender equality in the context of AI.
Against this backdrop, we advocate for frameworks that prioritise gender sensitivity, inclusivity, and transparency, such as the Gender Privacy Impact Assessment (GPIA): a tool specifically designed to assess the impacts of AI systems on gender equality and privacy. The GPIA can guide AI design and help ensure that deployment aligns with the protection of fundamental rights, especially those of women.
Original language: English
Title of host publication: Privacy Law Scholars Conference 2025
Status: Unpublished - 2025
