Background

Serious concerns have been raised about algorithms and social computing systems manipulating opinion and decision making, and about their suspected contribution to the creation of so-called echo chambers and the spread of disinformation. Only recently have researchers started investigating the unintended consequences of such systems. Recommendation systems, for example, are optimized for the contents’ ‘stickiness’ and for keeping users’ attention; as a result, these algorithms tend to reinforce people’s biases and thereby contribute to increasing polarization.

The prevalence of cognitive biases makes users of such systems susceptible to various types of manipulation. Prominent examples include confirmation bias (predominantly seeking out information that confirms existing views), cognitive dissonance (repudiating information that does not fit preconceived notions), and the so-called Von Restorff effect, whereby an item that stands out is more likely to be remembered than other items. Such biases can be found in both individuals and organizations and constitute a crucial obstacle to rational, logical discussion of polarizing topics.

A significant contributor to this problem is the algorithm-driven infrastructure of today's media landscape. Social media platforms (e.g., Twitter or Facebook) as well as content distribution services (e.g., YouTube), financed through advertisements, are compelled to find ways to keep users engaged for as long as possible. A very effective mechanism is the use of learning algorithms that select and recommend additional content based on prior user selections and predicted interests. These algorithms cater to users' interests and simultaneously nurture their inherent biases. Especially for polarizing topics such as climate change, immigration, abortion rights, or gun control, the selective distribution of information to receptive users reinforces opinions and leads to the development of so-called filter bubbles. Users are often unaware that they are moving inside such bubbles, which allows these biases to prevail unnoticed.

The goal of this workshop is to re-think the incentive structures and mechanisms of social computing systems, with particular regard to news media and people's cognitive biases. We will discuss the design, implementation, and effects of sensing techniques and computing systems that detect and mitigate biases and subsequently invite users to reflect on their views, acquire and advance media literacy, and build critical thinking skills. By focusing on cognitive biases from a content or system perspective as well as from a human perspective, we intend to sketch out blueprints for systems that contribute to a more informed public discourse and to depolarization by design. The nature and scope of this workshop will bring together researchers across disciplines, including cognitive psychology, philosophy, information retrieval, and HCI. Our goal is to establish a research agenda around cognitive biases and their detection, utilization, and possible fortification, in response to the changes in societal and public discourse brought about by recent technological and political developments.