Workshop on Online Misinformation- and Harm-Aware Recommender Systems

Co-located with RecSys 2020


September 25th - Rio de Janeiro, Brazil

News: Proceedings are now available at CEUR!

Social media platforms have become an integral part of most people's everyday lives and activities, providing new forms of communication and interaction. These sites allow users to share information and opinions (in the form of photos, short texts and comments) and to promote the formation of links and social relationships (friendships, follower/followee relations). One of the most valuable features of social platforms is their potential for disseminating information on a large scale. Recommender systems play an important role in this process, as they leverage massive amounts of user-generated content to assist users in finding relevant information and establishing new social relationships.

The adoption of social media, however, also exposes users to risks that can have a damaging effect on individuals and on society at large. The unmoderated nature of social media sites often results in the appearance and distribution of false or misleading content (for example, hoaxes, conspiracy theories, fake news and even satire), or even harmful content such as abusive, discriminatory and offensive comments, and incitement to acts of violence. In fact, the proliferation of misinformation and hate speech online has become a serious problem with negative consequences ranging from public health issues to the disruption of democratic systems.

As mediators of online information consumption, recommender systems are both affected by the proliferation of low-quality content in social media, which hinders their ability to make accurate predictions, and, at the same time, become unintended vehicles for the amplification and massive distribution of online harm. Some of these issues stem from the core concepts and assumptions on which recommender systems are built. For example, the homophily principle, according to which similar users are likely to be interested in the same items, can restrict recommendations to information that users already know or agree with, giving rise to so-called "echo chambers". Assumptions like these can be naïve and exclusionary in the era of fake news and ideological uniformity.
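
As a concrete illustration, the minimal sketch below shows how a user-based collaborative filtering step encodes the homophily assumption: recommendations for a user are drawn from the history of the most similar neighbour, so like-minded users keep feeding each other similar content. The interaction matrix and the recommend helper are hypothetical, not a prescribed method.

```python
# Minimal sketch of user-based collaborative filtering, illustrating the
# homophily assumption: items are recommended from the history of the
# most similar ("like-minded") user. All data here is hypothetical.
import numpy as np

# Rows = users, columns = items; 1 = user interacted with the item.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
])

def recommend(user: int, k: int = 2) -> list[int]:
    # Cosine similarity between the target user and every other user.
    norms = np.linalg.norm(interactions, axis=1)
    sims = interactions @ interactions[user] / (norms * norms[user] + 1e-9)
    sims[user] = -1.0  # exclude the user themself
    neighbour = int(np.argmax(sims))  # the most homophilous neighbour
    # Score items by the neighbour's history, masking items already seen.
    scores = interactions[neighbour] * (1 - interactions[user])
    return list(np.argsort(-scores)[:k])

print(recommend(0))  # suggestions come only from a like-minded neighbour
```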

In their attempt to deliver relevant and engaging suggestions about content and users, recommendation algorithms are prone to introducing biases. For example, popularity and homogeneity biases stem from over-reliance on popular sources and on social networks of like-minded individuals, respectively. These common biases limit users' exposure to diverse points of view and make them vulnerable to manipulation by disinformation. Likewise, recommender systems can be affected by biases in the data (stemming from imbalanced datasets), in the algorithms, and in user interaction or observation, most notably the biases related to relevance feedback loops (e.g., in ranking).
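
The feedback-loop side of this problem can be seen in a toy simulation (all numbers hypothetical): if the ranker always promotes the currently most-clicked item, early popularity compounds even when every item has identical intrinsic quality.

```python
# Toy simulation of a popularity feedback loop: always recommending the
# currently most-clicked item lets early popularity compound, independent
# of the items' underlying quality. All values are hypothetical.
import random

random.seed(0)
quality = [0.5, 0.5, 0.5]   # three items of identical intrinsic quality
clicks = [1, 1, 1]          # uniform starting click counts

for _ in range(1000):
    item = max(range(3), key=lambda i: clicks[i])  # popularity-biased ranking
    if random.random() < quality[item]:            # user sometimes clicks
        clicks[item] += 1                          # feedback reinforces the rank

print(clicks)  # one item dominates purely through the feedback loop
```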

Equipping recommender systems with misinformation- and harm-awareness mechanisms becomes essential not only to mitigate the negative effects of the diffusion of unwanted content, but also to increase the user-perceived quality of recommendations. Novel strategies such as the diversification of recommendations, bias mitigation, model-level disruption, and explainability and interpretation, among others, can help users make informed decisions in the context of online misinformation, hate speech and other forms of online harm.
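
As one example of such a strategy, the sketch below re-ranks a candidate list with Maximal Marginal Relevance (MMR), a standard diversification heuristic; the relevance scores, topic vectors and the mmr_rerank helper are illustrative assumptions, not a method prescribed by the workshop.

```python
# Minimal sketch of diversification via Maximal Marginal Relevance (MMR):
# greedily pick items that are relevant but dissimilar to those already
# selected. Scores and topic vectors below are hypothetical.
import numpy as np

def mmr_rerank(relevance, item_vecs, k, lam=0.7):
    """Greedily select k items, trading off relevance against similarity
    to already-selected items (higher lam favours relevance)."""
    selected, candidates = [], list(range(len(relevance)))
    while candidates and len(selected) < k:
        def score(i):
            if not selected:
                return relevance[i]
            max_sim = max(float(item_vecs[i] @ item_vecs[j]) for j in selected)
            return lam * relevance[i] - (1 - lam) * max_sim
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Three near-duplicate items on one topic plus one item on another topic.
vecs = np.array([[1, 0], [1, 0], [0.9, 0.1], [0, 1]], dtype=float)
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
rel = [0.9, 0.85, 0.8, 0.6]
print(mmr_rerank(rel, vecs, k=2))  # [0, 3]: the off-topic item surfaces
```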