Abstract
This study examines gender classification systems, shedding light on the interaction between social stereotypes and algorithmic determinations. Drawing on the "averageness theory," which suggests a relationship between a face’s attractiveness and the human ability to ascertain its gender, we explore the potential propagation of human bias into artificial intelligence (AI) systems. Utilising the AI model Stable Diffusion 2.1, we created a dataset containing various connotations of attractiveness to test whether the correlation between attractiveness and accuracy in gender classification observed in human cognition persists within AI. Our findings indicate that, akin to human dynamics, AI systems exhibit variations in gender classification accuracy based on attractiveness, mirroring social prejudices and stereotypes in their algorithmic decisions. This finding underscores the critical need to consider the impact of human perceptions on data collection and highlights the necessity of a multidisciplinary and intersectional approach to AI development and AI training data. By incorporating cognitive psychology and feminist legal theory, we examine how the data used for AI training can foster gender diversity and fairness under the scope of the AI Act and the GDPR, reaffirming how psychological and feminist legal theories can offer valuable insights for ensuring the protection of gender equality and non-discrimination in AI systems.
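As a rough illustration of the kind of pipeline described in the abstract, the sketch below generates face portraits with Stable Diffusion 2.1 (via the diffusers library) and runs them through an off-the-shelf gender classifier. The prompt wording and the classifier checkpoint are illustrative assumptions, not the materials used in the study.

```python
# Minimal sketch of the pipeline described in the abstract, not the authors' implementation.
# Assumption: the prompts and the gender-classification checkpoint below are placeholders.
import torch
from diffusers import StableDiffusionPipeline
from transformers import pipeline

# Stable Diffusion 2.1 is the generator named in the abstract.
sd = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Hypothetical prompts encoding different "connotations of attractiveness".
prompts = [
    "studio portrait photo of an attractive woman, neutral background",
    "studio portrait photo of an unattractive woman, neutral background",
    "studio portrait photo of an attractive man, neutral background",
    "studio portrait photo of an unattractive man, neutral background",
]

# Any off-the-shelf binary gender classifier could stand in here; this checkpoint name is an assumption.
classify = pipeline("image-classification", model="rizvandwiki/gender-classification")

for prompt in prompts:
    image = sd(prompt, num_inference_steps=30).images[0]
    top = classify(image)[0]  # top-1 predicted gender label and its confidence score
    print(f"{prompt!r} -> {top['label']} ({top['score']:.2f})")
```

Comparing classification accuracy across the "attractive" and "unattractive" prompt groups would then indicate whether the attractiveness–accuracy correlation carries over to the AI system.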
Original language | English |
---|---|
Title of host publication | “My Kind of Woman”: Analysing Gender Stereotypes in AI through the Averageness Theory and EU law |
Publisher | IAIL 2024 - HHAI 2024 workshop - CEUR 2024 |
Publication status | Published - 10 Jun 2024 |
Event | IAIL 2024 - HHAI 2024 workshop - CEUR 2024 - Duration: 10 Jun 2024 → … |
Workshop
Workshop | IAIL 2024 - HHAI 2024 workshop - CEUR 2024 |
---|---|
Period | 10/06/24 → … |
Bibliographical note
Doh, M., & Karagianni, A. (2024). "My Kind of Woman": A Feminist Legal Perspective of Gender Stereotypes and AI Bias through the Averageness Theory and EU Law. In “My Kind of Woman”: Analysing Gender Stereotypes in AI through The Averageness Theory and EU Law IAIL 2024- HHAI 2024 workshop- CEUR 2024Keywords
- gender bias
- facial analysis
- generative AI
- EU AI Act
- GDPR
- data fairness