Abstract
Although the EDPB has been criticised for the vagueness of Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models, this paper argues that the EDPB had no choice but to reject the unfounded argument that the mere storage of an ML model never amounts to the processing of personal data.
It also shows that while the EDPB Opinion does not clearly draw conclusions from its consideration of current LLM development practices, which rely upon massive web scraping and a lack of preliminary data tagging, it does not misrepresent the issues at stake. Instead, it seeks to strike a balance, leaving the door open to the potential use of state-of-the-art base models for low-risk applications and to the anonymisation of AI models as a means to address unlawful development practices.
| Original language | English |
|---|---|
| Number of pages | 5 |
| Journal | Privacy and Data Protection |
| Volume | 25 |
| Issue number | 6 |
| Publication status | Published - 2025 |