How does one do research on algorithms and their outputs when confronted with inherent algorithmic opacity and black-boxness, as well as with the limitations of API-based research and the data access gaps imposed by platforms’ gate-keeping practices? This article outlines the methodological steps we undertook to manoeuvre around the above-mentioned obstacles. It is a “byproduct” of our investigation into datafication and the ways in which algorithmic identities are produced for personalisation, ad delivery and recommendation. Following Paßmann and Boersma’s (2017) suggestion to pursue “practical transparency” and focus on particular actors, we experiment with different avenues of research. We develop and employ an approach of letting the platforms speak and making the platforms speak. In doing so, we also use non-traditional research tools, such as transparency and regulatory tools, and repurpose them as objects of/for study. Empirically testing the applicability of this integrated approach, we elaborate on the possibilities it offers for the study of algorithmic systems, while remaining cognizant of its limitations and shortcomings.
- datafication; algorithmic identity; practical transparency; methodology; digital methods; subject access request