Background

Misinformation is rampant on the Internet today. Characterized as factually incorrect information that is intentionally manipulated to deceive the receiver, misinformation often challenges our ability to tell fake from truth. With ties to the war propaganda of the early 20th century, fake news is often accompanied by the fabrication of facts and the manipulation of narratives. Compared to the most extreme and infamous misinformation campaigns, such as those conducted by the German Empire during World War I, the majority of recent misinformation campaigns may not appear as harmful. Today, however, misinformation can spread almost instantaneously around the globe, reaching larger audiences than ever before, with unprecedented consequences. During the COVID-19 pandemic, for example, the message that industrial alcohol kills the COVID-19 virus circulated across various platforms and resulted in the death of around 480 people and the sickness of thousands more. New technology has eased the distribution of misinformation and enabled governments, organisations, and individuals to influence public opinion or convince the public of a specific agenda, a traditional goal of misinformation campaigns. However, the same technology also offers governments and organisations new avenues for detecting and correcting false information.

Even though a wide range of tools and approaches for detecting and debunking misinformation exists, the question of why people believe false information remains complex. Research by Lewandowsky and colleagues has found that misinformation is often presented as a conspiracy theory, making people less likely to accept accurate information from official sources. Similar findings were presented by Van der Linden et al., who exposed participants to both factually correct statements and misinformation and found no change in participants' beliefs. The authors concluded that the mere presence of misinformation can cause people to believe it. Another aspect that needs consideration is a lack of expertise in a specific domain, e.g., science. Lay knowledge makes it harder to reach informed decisions, so non-experts often fall back on simplified judgement criteria. Cook and colleagues use critical thinking strategies, e.g., methods aimed at revealing the deceptive arguments within false claims, to inform consumers of misinformation.

In contrast to these approaches, traditional inoculation theory proposes vaccinating people against persuasive misinformation by building up immunity. The irony could not be greater, as the most prominent misinformation campaign of our time is grounded in a (now retracted) paper suggesting that the measles, mumps, and rubella (MMR) vaccine potentially causes developmental disorders in children. As in medicine, effective inoculation requires exposure to a weakened form of the misinformation before the actual misinformation is encountered, together with the presentation of opposing claims. This latter part bears the risk of causing a backfire effect: if a person has already been convinced by misinformation, presenting the opposing argument often triggers a counter-reaction that reinforces the pre-existing beliefs. As misinformation has long existed in the public sphere, inoculation strategies have to be smarter and more targeted.

The reasons for these phenomena can be found in the foundations of human cognition. When encountering new information, we tend to assess it for its affinity with our existing knowledge, which requires cognitive effort. Whereas confirming one's pre-existing beliefs creates positive affect, dealing with opposing information triggers a rather negative affective response, as postulated by cognitive dissonance theory. Even if technology succeeds in accurately detecting and correcting misinformation, a plethora of pitfalls inhibits the potential corrective effects. When people are confronted directly with misinformation in order to debunk it, encountering a confirmation of their existing knowledge, whether right or wrong, often further solidifies the confirmed belief, simply because of the increased amount of evidence at hand.

The very same technologies that are used to collect large amounts of personal information and to target users’ cognitive vulnerabilities also offer intelligent solutions to the problem of misinformation. Pattern recognition and Natural Language Processing have made fact-checking applications and spam filters more accurate and reliable. Machine Learning, big data, and context-aware computing systems can be used to detect misinformation in-situ and provide cognitive security. Today, such self-learning systems protect users and prevent misinformation from finding fertile ground.
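As a minimal sketch of the kind of text-classification pipeline alluded to above, the following Python snippet trains a supervised misinformation classifier with scikit-learn, combining TF-IDF features with logistic regression. The inline claims, labels, and deployment scenario are illustrative assumptions, not a description of an existing system.

```python
# Minimal sketch of a supervised misinformation classifier: TF-IDF features
# fed into a logistic-regression model. The tiny inline dataset is purely
# illustrative; a real system would be trained on a large labelled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = misinformation, 0 = factual.
texts = [
    "Industrial alcohol kills the COVID-19 virus",
    "Vaccines cause developmental disorders in children",
    "The MMR vaccine does not cause autism",
    "Washing hands reduces the spread of viruses",
]
labels = [1, 1, 0, 0]

# Word- and bigram-level TF-IDF features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score an unseen claim; in practice this could run in-situ, e.g. inside
# a browser extension or a social-media client.
claim = "Drinking alcohol protects against the virus"
print(model.predict_proba([claim])[0][1])  # estimated probability of misinformation
```

Such a classifier only flags suspicious content; the HCI challenge addressed by this workshop is how to present its output so that users engage with it critically rather than dismiss it.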

Researchers and practitioners in Human-Computer Interaction (HCI) are at the forefront of designing and developing user-facing computing systems. Consequently, we bear a special responsibility for working on solutions to mitigate problems arising from misinformation and bias-enforcing interfaces.

This workshop brings together designers, developers, and thinkers across disciplines to redefine computing systems, focusing on technologies such as recommender and social computing systems. The workshop aims to foster the development of applications that detect and limit the spread of misinformation, as well as interfaces that instil and nurture critical thinking in their users. By focusing on the problem of misinformation and users’ cognitive security from a computing and HCI perspective, this workshop will sketch out blueprints for systems and interfaces that advance media literacy, foster critical thinking skills, and help users tell fact from fake.