As the majority of news is consumed online, recommender systems play a pivotal role in curating the
constant influx of new content, raising concerns about reduced diversity, a quality vital to
democratic societies. This research addresses these concerns by examining how nudges aimed at
encouraging diversity influence user behavior.
Despite demand for diverse perspectives, adoption of diversity-enhancing practices is constrained by
difficulties in operationalizing the concept and uncertainty about user acceptance. This study
proposes using Large Language Models to identify differing viewpoints and to generate transparent
explanations that encourage users to engage with them. It employs a multi-stakeholder,
interdisciplinary methodology, combining elements from computational communication, the social
sciences, and human-computer interaction to address this challenge.
The research objectives comprise theoretical advances in understanding the persuasive potential of
explanations; the methodological development of a scalable, user-friendly online experimentation
framework that enables social scientists to conduct realistic studies of digital communication; and
an empirical investigation of how explanation patterns affect the consumption of diverse content and
the user experience. The outcomes will include practical design guidelines that help recommender
system designers adopt effective explanation strategies, thereby offering building blocks for
future-proof, evidence-based regulation.