Recently, scholars across disciplines have raised ethical, legal, and social questions about the notion of human intervention, control, and oversight over AI systems. This concern becomes particularly important in the age of ubiquitous computing and the increasing adoption of AI in everyday communication infrastructures. Building on Nicholas Garnham’s theoretical framework of mediation, which encompasses human agents, technological tools, and systems of symbolic representation, this paper proposes a conceptual interpretation of human agency in mediation and redress within the use of opaque algorithmic systems. Distinguishing between active and passive human agency, this paper explains how individual and collective AI redress capacities may enhance active human agency and user empowerment in AI mediation. To illustrate the difference between active and passive human agency and to highlight the significance of AI redress mechanisms, we analyse the case of the automated generation of non-consensual pornographic content on digital platforms such as messaging apps. As the case study shows, the prevailing phenomenon of passive human agency in AI mediation calls for increasing user empowerment through the redress of AI-mediated decisions as a means to foster active human agency. We provide socio-technical and policy recommendations for enabling active redress mechanisms. Ultimately, we identify routes for future theoretical and empirical research on active human agency in times of ubiquitous AI.
Journal: AI & Society
Publication status: Accepted/In press - 2021
Bibliographical note: Special issue on AI for People
Keywords:
- human agency
- artificial intelligence
- AI mediation