Practical AI Transparency — Revealing Datafication and Algorithmic Identities

Research output: Contribution to journal › Article



How does one do research on algorithms and their outputs when confronted with inherent algorithmic opacity and black-boxness, as well as with the limitations of API-based research and the data access gaps imposed by platforms’ gate-keeping practices? This article outlines the methodological steps we undertook to manoeuvre around the above-mentioned obstacles. It is a “byproduct” of our investigation into datafication and the way algorithmic identities are produced for personalisation, ad delivery and recommendation. Following Paßmann and Boersma’s (2017) suggestion to pursue “practical transparency” and to focus on particular actors, we experiment with different avenues of research. We develop and employ an approach of letting the platforms speak and making the platforms speak. In doing so, we also use non-traditional research tools, such as transparency and regulatory tools, and repurpose them as objects of/for study. Empirically testing the applicability of this integrated approach, we elaborate on the possibilities it offers for the study of algorithmic systems, while remaining cognizant of its limitations and shortcomings.
Original language: English
Pages (from-to): 84-125
Number of pages: 41
Journal: Journal of Digital Social Research
Issue number: 3
Publication status: Published - 9 Nov 2020


  • datafication; algorithmic identity; practical transparency; methodology; digital methods; subject access request
