Are You AI’s Favourite? EU Legal Implications of Biased AI Systems in Clinical Genetics and Genomics

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)


The article provides a legal overview of biased AI systems in clinical genetics and genomics. The overview considers bias from two perspectives: societal and statistical. The paper explores how bias can be defined from each perspective and how it can generally be classified. Building on these two perspectives, the paper examines three negative consequences of bias in AI systems: discrimination and stigmatization (the more societal concepts) and inaccuracy of AI’s decisions (more closely related to the statistical perception of bias). Each of these consequences is analyzed within the legal framework to which it corresponds. Recognizing inaccuracy as a harm caused by biased AI systems is one of the article’s most important contributions: it is argued that, once identified, bias in an AI system indicates possible inaccuracy in its outcomes. The article demonstrates this through an analysis of the medical devices framework: whether it applies to AI applications used in genomics and genetics, how it defines bias, and what requirements it imposes to prevent bias. The paper also considers how this framework can work together with anti-discrimination and anti-stigmatization rules, especially in light of the upcoming general legal framework on AI. The authors conclude that all of these frameworks should be considered in the fight against bias in AI systems, because they reflect different approaches to the nature of bias and thus provide a broader range of mechanisms to prevent or minimize it.
Original language: English
Pages (from-to): 155-174
Number of pages: 20
Journal: European Pharmaceutical Law Review
Issue number: 4
Publication status: Published - 2021
