Workshop Schedule

Session: Misinformation and Social Impact (60min)

  • Introduction: 10min
  • Position Papers 4x(5+5)min: 40min
  • Break-out session for block topic discussion: 10min

Coffee/Tea/Beer Break (10min)

Session: Mitigation Strategies (55min)

  • Consolidation of Session 1: 5min
  • Position Papers 4x(5+5)min: 40min
  • Break-out session for block topic discussion: 10min

Bathroom Break (5min)

Activity: Bias Detection (60min)

  • Consolidation of Session 2: 5min
  • Activity: 55min

Coffee/Tea/Beer Break (10min)

Session: Applications (60min)

  • Consolidation of Session 3: 5min
  • Position Papers 3x(5+5)min: 30min
  • Break-out session for block topic discussion: 10min
  • Consolidation of Session 4: 5min

Closing & Wrap-Up (10min)

Position Papers

      Brendan Spillane (Trinity College Dublin)
      Vincent Wade (Trinity College Dublin)

There are many difficulties in studying bias at the production, dissemination, or consumption stages of the news pipeline. These include the difficulty of identifying high-quality empirical research, the lack of agreed terminology and definitions, and the overlapping nature of many forms of bias. Much of the empirical research in the domain is disjointed, and there are few examples of concerted efforts to address overarching research challenges. This paper details ongoing work to create a classification of biases relating to news. It is divided into three sub-classifications focusing on the production, dissemination, and consumption stages of the news pipeline.

      Saleem Masadeh (New Mexico State University)
      Bill Hamilton (New Mexico State University)

Social media platforms are increasingly shaping political discourse by hosting new generations of political commentators and serving as a primary means of news distribution. However, the business models of these platforms, largely driven by advertising, may be introducing bias at both systemic and personal cognitive levels. In this work, we briefly discuss how the YouTube platform potentially promotes bias through a combination of platform designs, policies, and recommendation algorithms. Finally, we discuss design ideas for augmenting the user experience of YouTube that may help mitigate these biases.

      Michael Dickard (The Vanguard Group)

The effects of news media and misinformation matter not only in the political domain but also in other ambiguous, high-stakes situations such as finance. Individual investors are particularly prone to cognitive biases that lead to poor investment decisions, which algorithms and design may either worsen or reduce. Although a growing body of behavioral economics research has explored cognitive biases among individual investors, little research has examined how investing applications and new mobile devices affect decision-making, or how we might detect, reduce, or inoculate investors against their own biases. In this position paper I argue that researchers should focus on the effects of aggregated news, network effects, and data visualization on investor decision-making. I review relevant literature and discuss ways that researchers can continue to explore these topics.

      Waheeb Yaqub (The University of Sydney)
      Micah Goldwater (The University of Sydney)
      Judy Kay (The University of Sydney)

Science news matters because many people use it to make important decisions across diverse aspects of their lives. This paper analyses how cognitive biases affect the processes that produce science news, particularly those that contribute to the unintentional creation of junk science news. We present a stakeholder analysis of the processes involved in the dissemination and consumption of scientific results, identifying key cognitive biases that play a role in the spread of incorrect information. We then explore ways to augment the interfaces used throughout those processes. We consider how to design interface elements that help people become aware of the cognitive biases contributing to the creation and dissemination of junk science; that awareness can be a foundation for each stakeholder group to avoid unintentional contributions to this problem. Our work provides a foundation for new research on personalised interface mechanisms and elements at each stage of the news dissemination process, helping people improve the quality of science news and recognise junk science news.

      Simon Buckingham Shum (University of Technology Sydney)
      Cherie Lucas (University of Technology Sydney)

As citizens are confronted by major societal changes, they find their assumptions being challenged and their identities threatened, bringing the risk that they retreat to like-minded ‘bubbles’ rather than ask whether they might have something to learn. Algorithmically driven media platforms exacerbate this process by amplifying cognitive biases and polarizing debate. This paper argues for a distinctive role that Artificial Intelligence (AI) can play by holding up a metaphorical ‘mirror’ to online writers, with carefully designed feedback making them more aware of, and reflective about, their reactions and approaches to challenging situations. As an example, we describe a web application that uses Natural Language Processing to annotate written accounts of personal responses to challenging experiences, highlighting where the author appears to be reflecting shallowly or deeply. This open-source tool is already used by students to make sense of work placement challenges they encounter, but it could find wider application. Our vision is that such tools could make citizens more self-aware of their biases, less reactive, and more open to new perspectives when their assumptions are challenged.
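
To make the annotation idea concrete, here is a deliberately crude, rule-based sketch (our illustration, not the authors' open-source tool, which uses trained NLP models); the indicator phrases are hypothetical:

    import re

    # Hypothetical indicator phrases; a real system would rely on trained
    # NLP models rather than keyword matching.
    DEEP = ["i realised", "i learned", "in future i will", "i now understand"]
    SHALLOW = ["then i", "we did", "it was", "the task was"]

    def annotate(text):
        """Tag each sentence as DEEP, SHALLOW, or NEUTRAL reflection."""
        for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
            lowered = sentence.lower()
            if any(p in lowered for p in DEEP):
                label = "DEEP"
            elif any(p in lowered for p in SHALLOW):
                label = "SHALLOW"
            else:
                label = "NEUTRAL"
            print(f"[{label}] {sentence}")

    annotate("Then I handed out the medication. I realised I had assumed the "
             "patient understood me. In future I will check their understanding.")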

      Andrew W. Vargo (Keio University)
      Benjamin Tag (University of Melbourne)

Peer-production systems have been used to facilitate the creation of high-quality data products such as encyclopedias, repositories of technical information and software code, and creative efforts like cookbooks and literature. A proposed extension of the peer-production model is the creation and provision of bias analysis. An ideal application could provide users with a well-defined bias score for content or consumption. However, peer-production systems themselves can be vulnerable to bias and manipulation. Therefore, in this paper, we discuss recommendations for constructing a working peer-production model and provide an overview of the challenges and realities. To ground the discussion, we present the "SANCTUARY" framework, a distributed peer-production system for fostering bias awareness among online content consumers.
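
As a concrete illustration of the manipulation problem (our sketch; SANCTUARY's actual aggregation scheme is not described here), a peer-produced bias score can be hardened with a trimmed mean, which discards extreme votes from coordinated raters:

    def trimmed_bias_score(ratings, trim=0.2):
        """Average peer ratings (0 = unbiased .. 1 = heavily biased) after
        discarding the top and bottom `trim` fraction of raters."""
        if not ratings:
            raise ValueError("no ratings submitted")
        ordered = sorted(ratings)
        k = int(len(ordered) * trim)
        kept = ordered[k:len(ordered) - k] or ordered
        return sum(kept) / len(kept)

    # Ten honest raters around 0.3 plus a two-vote brigade of 1.0 ratings:
    honest = [0.25, 0.27, 0.28, 0.30, 0.30, 0.30, 0.30, 0.32, 0.33, 0.35]
    attack = [1.0, 1.0]
    print(trimmed_bias_score(honest + attack))  # ~0.31, despite the attack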

      Po-Ming Law (Georgia Institute of Technology)
      Sana Malik (Adobe Research)
      Fan Du (Adobe Research)
      Moumita Sinha (Adobe Research)

Machine learning models often make predictions that are biased against certain subgroups of the input data. When undetected, machine learning biases can have significant financial and ethical implications. Semi-automated tools that keep humans in the loop could facilitate bias detection, yet little is known about the considerations involved in their design. In this paper, we report on an interview study with 11 machine learning practitioners investigating the needs surrounding semi-automated bias detection tools. Based on the findings, we highlight four design considerations to guide system designers who aim to create future tools for bias detection.
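
One signal such a tool might surface, sketched below under our own assumptions (the paper does not prescribe a metric), is the demographic-parity gap: subgroups whose positive-prediction rate strays far from the overall rate are flagged for human review:

    from collections import defaultdict

    def flag_subgroups(predictions, groups, threshold=0.1):
        """Flag subgroups whose positive-prediction rate differs from the
        overall rate by more than `threshold`."""
        overall = sum(predictions) / len(predictions)
        by_group = defaultdict(list)
        for pred, group in zip(predictions, groups):
            by_group[group].append(pred)
        return {g: sum(p) / len(p) for g, p in by_group.items()
                if abs(sum(p) / len(p) - overall) > threshold}

    # Toy example: binary recommendations for applicants in groups "a" and "b".
    preds  = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
    groups = ["a", "a", "a", "a", "a", "a", "b", "b", "b", "b"]
    print(flag_subgroups(preds, groups))  # both flagged: a ~0.67, b = 0.25 vs 0.5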

      Christina Schneegass (LMU Munich)
      Fiona Draxler (LMU Munich)

Cognitive biases can consciously and subconsciously affect the way we store and recall previously learned information. In the use case of mobile learning, biases such as the Negative Suggestion Effect (NSE) can make us think a statement is correct because we wrongly selected it in a previous multiple-choice test. In some cases, the suggestion effect is so persistent that even corrections cannot stop us from drawing assumptions based on the misinformation we once learned; this effect is called the Continued Influence Bias (CIB). To avoid the creation of such incorrect and sometimes persistent memories, learning applications need to be designed carefully. In this position paper, we discuss how the number of answer options presented, the feedback given, and the lesson design influence the strength of the NSE and CIB, and we provide recommendations for countermeasures.

      Senuri Wijenayake (University of Melbourne)
      Niels van Berkel (Aalborg University)
      Jorge Goncalves (University of Melbourne)

Experimenter-induced influences can trigger biased responses from research participants. Based on existing literature, we evaluate how digital bots can be used as an alternative research tool to mitigate these biases. We note that the conversational interactivity provided by bots can significantly reduce biased responses and satisficing behaviour, while simultaneously enhancing disclosure and facilitating scalability. Bots can also build rapport with participants and explain tasks at hand as well as a human experimenter can, with the added benefit of anonymity. However, bots often follow a predetermined script when conversing and therefore may not be able to handle complex and unstructured conversations, which can frustrate users. Studies also suggest that bots with human-like features may induce experimenter effects similar to those caused by humans. We conclude with a discussion of how bots could be designed for optimal use in research.
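
The predetermined-script limitation is easy to see in a minimal bot skeleton (our sketch, not any study's implementation): the bot asks fixed questions and returns canned acknowledgements, so an unexpected, unstructured reply gets no real handling:

    QUESTIONS = [
        "On a scale of 1-5, how often do you read online news?",
        "In your own words, why do you trust or distrust it?",
    ]

    def run_session(get_answer):
        """Run one anonymous scripted interview; get_answer supplies replies."""
        print("Hi! I'm a study bot. Your answers are anonymous.")
        responses = []
        for question in QUESTIONS:
            reply = get_answer(question)   # e.g. input() in a console pilot
            print("Thanks, noted.")        # canned rapport, no understanding
            responses.append((question, reply))
        print("That's everything -- thank you for participating!")
        return responses

    if __name__ == "__main__":
        run_session(lambda q: input(q + "\n> "))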

      Steven R. Rick (UC San Diego)
      Erin Beneteau (University of Washington)
      Regina Casanova-Perez (University of Washington)
      Cezanne Lane (University of Washington)
      Colleen Emmenegger (UC San Diego)
      Janice Sabin (University of Washington)
      Wanda Pratt (University of Washington)
      Andrea Hartzler (University of Washington)
      Nadir Weibel (UC San Diego)

Cognitive bias is pervasive in healthcare. It drives differential diagnosis and the timely recognition of acute-onset illness, but it also contributes to healthcare inequity. Patients may not be treated equitably due to different identities (race, gender, socio-economic status, etc.) or different diseases (obesity, diabetes, hypertension, etc.). In our work we investigate whether biased behaviors between patients and providers can be detected through a technique known as Social Signal Processing. Our project explores how computational sensing can be used to identify behavioral biases, and whether it can promote improved patient-provider communication, ultimately reducing health disparities for low-income, racially diverse patients in primary care. Through a partnership with academic and community-based health systems in Seattle and San Diego, we aim to characterize behavior between providers and patients, develop a behavior-sensing tool, design interventional feedback, and evaluate how effective that tool and feedback are at improving patient-provider communication. We believe this approach will lead to new techniques for shaping the next generation of healthcare providers and educators, helping them better promote healthcare access, quality, and equity.
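
As one concrete example of what such sensing could compute (our illustration, not the project's tool), classic social-signal features such as speaking-time share can be derived from diarised (speaker, start, end) segments of a visit:

    from collections import defaultdict

    def speaking_time_share(segments):
        """segments: iterable of (speaker, start_sec, end_sec) tuples."""
        totals = defaultdict(float)
        for speaker, start, end in segments:
            totals[speaker] += end - start
        whole = sum(totals.values())
        return {speaker: t / whole for speaker, t in totals.items()}

    # Toy diarisation of a primary-care visit:
    visit = [("provider", 0, 50), ("patient", 50, 58), ("provider", 58, 110)]
    print(speaking_time_share(visit))  # provider ~0.93, patient ~0.07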

      Sadafumi Tonomoto (Osaka Prefecture University)
      Motoi Iwata (Osaka Prefecture University)
      Koichi Kise (Osaka Prefecture University)

Extensive reading is a way of improving language ability by simply reading a lot. Although there are recommended rules to follow, it is sometimes difficult to sustain a user’s engagement in reading, and technology that helps sustain it would be valuable. In this paper we report our first trial at establishing nudging strategies for this purpose. From experiments with 29 participants over 25 days in total, we found that setting a goal for the amount of reading, as well as sharing that goal with a peer group of users of similar ability, are effective nudging strategies. Although goal setting is not effective for the majority of users, we were able to prepare a prescription that selects the users who benefit, based on their personal traits and the amount they read. These results bring us closer to personalized nudging strategies.
