A Comparative Evaluation of Interventions Against Misinformation: Augmenting the WHO Checklist
Description

During the COVID-19 pandemic, the World Health Organization provided a checklist to help people distinguish between accurate information and misinformation. In controlled experiments in the United States and Germany, we investigated the utility of this ordered checklist and designed an interactive version to lower the cost of acting on checklist items. Across interventions, we observe non-trivial differences between the two countries in participants' performance at distinguishing accurate information from misinformation, and we discuss possible reasons that may predict the future helpfulness of the checklist in different environments. The checklist item that provides source labels was most frequently followed and was considered most helpful. Based on our empirical findings, we recommend that practitioners focus on providing source labels rather than on interventions that support readers in performing their own fact-checks, even though this recommendation may be influenced by the WHO's chosen order. We discuss the complexity of providing such source labels and provide design recommendations.

"You have to prove the threat is real": Understanding the needs of Female Journalists and Activists to Document and Report Online Harassment
Description

Online harassment is a major societal challenge that impacts multiple communities. Some members of these communities, such as female journalists and activists, bear significantly higher impacts because their profession requires easy accessibility and transparency about their identity, and involves highlighting stories of injustice. Through a multi-phased qualitative research study involving a focus group and interviews with 27 female journalists and activists, we mapped the journey of a target who goes through harassment. We introduce the PMCR framework as a way to focus on needs for Prevention, Monitoring, Crisis, and Recovery. We focused on Crisis and Recovery, and designed a tool to satisfy a target's needs related to documenting evidence of harassment during the crisis and creating reports that could be shared with support networks for recovery. Finally, we discuss users' feedback on this tool, highlighting the needs of targets as they face this burden, and offer recommendations to future designers and scholars on how to develop tools that can help targets manage their harassment.

Method for Appropriating the Brief Implicit Association Test to Elicit Biases in Users
Description

Implicit tendencies and cognitive biases play an important role in how information is perceived and processed, a fact that can be both utilised and exploited by computing systems. The Implicit Association Test (IAT) has been widely used to assess people's associations of target concepts with qualitative attributes, such as the likelihood of being hired or convicted depending on race, gender, or age. The condensed version, the Brief IAT (BIAT), aims to elicit implicit biases by measuring the reaction time to concept classifications.

To use this measure in HCI research, however, we need a way to construct and validate target concepts, which tend to quickly evolve and depend on geographical and cultural interpretations. In this paper, we introduce and evaluate a new method to appropriate the BIAT using crowdsourcing to measure people's leanings on polarising topics. We present a web-based tool to test participants' bias on custom themes, where self-assessments often fail. We validated our approach with 14 domain experts and assessed the fit of crowdsourced test construction. Our method allows researchers of different domains to create and validate bias tests that can be geographically tailored and updated over time. We discuss how our method can be applied to surface implicit user biases and run studies where cognitive biases may impede reliable results.
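As a concrete illustration of how reaction times become a bias measure, the sketch below computes an IAT-style D-score in the spirit of the widely used Greenwald et al. (2003) scoring convention; the abstract does not specify this paper's exact scoring procedure, so the function, the trial-exclusion threshold, and the data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an IAT-style D-score, assuming two blocks of reaction
# times (in ms) per participant. Follows the common Greenwald et al. (2003)
# convention; all names and thresholds here are illustrative.
from statistics import mean, stdev

def d_score(compatible_rts, incompatible_rts):
    """Positive values indicate slower responses in the incompatible block."""
    # Discard implausibly slow trials (> 10,000 ms), per the usual convention.
    comp = [rt for rt in compatible_rts if rt <= 10_000]
    incomp = [rt for rt in incompatible_rts if rt <= 10_000]
    # Standardise the mean latency difference by the pooled standard deviation.
    pooled_sd = stdev(comp + incomp)
    return (mean(incomp) - mean(comp)) / pooled_sd

# Example: a participant who is roughly 100 ms slower when a pairing
# runs against their implicit association yields a positive score.
print(d_score([650, 700, 620, 680], [760, 800, 750, 790]))
```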

Bridging Contextual and Methodological Gaps on the “Misinformation Beat”: Insights from Journalist-Researcher Collaborations at Speed
Description

As misinformation, disinformation, and conspiracy theories increase online, so does journalism coverage of these topics. This reporting is challenging, and journalists fill gaps in their expertise by utilizing external resources, including academic researchers. This paper discusses how journalists work with researchers to report on online misinformation. Through an ethnographic study of thirty collaborations, including participant-observation and interviews with journalists and researchers, we identify five types of collaborations and describe what motivates journalists to reach out to researchers — from a lack of access to data to support for understanding misinformation context. We highlight challenges within these collaborations, including misalignment in professional work practices, ethical guidelines, and reward structures. We end with a call to action for CHI researchers to attend to this intersection, develop ethical guidelines around supporting journalists with data at speed, and offer practical approaches for researchers filling a “data mediator” role between social media and journalists.

Birds of a Feather Don't Fact-check Each Other: Partisanship and the Evaluation of News in Twitter's Birdwatch Crowdsourced Fact-checking Program
Description

There is a great deal of interest in the role that partisanship, and cross-party animosity in particular, plays in interactions on social media. Most prior research, however, must infer users’ judgments of others’ posts from engagement data. Here, we leverage data from Birdwatch, Twitter’s crowdsourced fact-checking pilot program, to directly measure judgments of whether other users’ tweets are misleading, and whether other users’ free-text evaluations of third-party tweets are helpful. For both sets of judgments, we find that contextual features – in particular, the partisanship of the users – are far more predictive of judgments than the content of the tweets and evaluations themselves. Specifically, users are more likely to write negative evaluations of tweets from counter-partisans; and are more likely to rate evaluations from counter-partisans as unhelpful. Our findings provide clear evidence that Birdwatch users preferentially challenge content from those with whom they disagree politically. While not necessarily indicating that Birdwatch is ineffective for identifying misleading content, these results demonstrate the important role that partisanship can play in content evaluation. Platform designers must consider the ramifications of partisanship when implementing crowdsourcing programs.
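As a rough illustration (not the authors' analysis) of what it means for one feature set to be more predictive than another, the sketch below contrasts a contextual feature against a content feature on simulated data; the feature names, the label-generating process, and the choice of logistic regression with AUC are all assumptions invented for this example and do not come from the Birdwatch dataset.

```python
# Illustrative comparison of contextual vs. content features for predicting
# a "misleading" judgment, on simulated data with invented feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1_000
# Contextual feature: partisanship gap between the rater and the tweet author.
partisan_gap = rng.uniform(0, 1, n)
# Content feature: e.g., a toxicity-style score of the tweet text.
content_score = rng.uniform(0, 1, n)
# Simulated labels in which context dominates, constructed to mirror the
# pattern the paper reports (this bakes in the result; it is not evidence).
y = (partisan_gap + 0.2 * content_score + rng.normal(0, 0.3, n)) > 0.7

for name, X in [("contextual", partisan_gap.reshape(-1, 1)),
                ("content", content_score.reshape(-1, 1))]:
    auc = cross_val_score(LogisticRegression(), X, y, scoring="roc_auc", cv=5)
    print(f"{name}: mean AUC = {auc.mean():.2f}")
```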
