Counterspeech, i.e., direct responses against hate speech, has become an important tool to address the increasing amount of hate online while avoiding censorship. Although AI has been proposed to help scale up counterspeech efforts, this raises questions of how exactly AI could assist in the process, since counterspeech is a deeply empathetic and agentic act for those involved. In this work, we aim to answer this question by conducting in-depth interviews with 10 highly experienced counterspeakers and a large-scale public survey with 342 everyday social media users. In participants' responses, we identified four main types of barriers and AI needs related to resources, training, impact, and personal harms. However, our results also revealed overarching concerns about authenticity, agency, and functionality in using AI tools for counterspeech. To conclude, we discuss considerations for designing AI assistants that lower counterspeaking barriers without jeopardizing its meaning and purpose.
https://doi.org/10.1145/3613904.3642025
Content creators (e.g., gamers, activists, vloggers) with marginalized identities are at risk of experiencing hate and harassment. This paper examines the ableist hate and harassment that disabled content creators experience on social media. Through surveys (N=50) and interviews (N=20) with disabled creators, we developed a taxonomy of 11 types of ableist hate and harassment (e.g., eugenics-related speech, denial and stigmatization of accessibility) and outlined how ableism harms creators’ well-being and content creation practices. Using statistical modeling, we investigated differences in ableist experiences given creators’ intersecting identities such as race and sexuality. We found that LGBTQ disabled creators face significantly more ableist hate than non-LGBTQ disabled creators. Lastly, we discuss our findings through an infrastructure lens to highlight how disabled creators experience platform-enabled ableism, undergo labor to cope with hate, and develop strategies to safeguard against future hate.
https://doi.org/10.1145/3613904.3641949
A majority of people experience trauma, spurring calls to incorporate trauma-informed approaches (TIA) from public health and social work into technology design. While technologies touted as trauma-informed are starting to appear in the literature, a gap in knowledge persists around how design teams apply TIA and qualify their technology as adhering to trauma-informed principles. We address this through a 12-month development project with trauma and sexual violence experts to produce Ube, a data donation platform for collecting online dating sexual consent data to improve sexual risk detection AI. Through analysis of design documentation, we retrospectively articulate a trauma-informed design process that evolved over the course of Ube’s development, comprising three elements for integrating trauma-informed principles: design goals that adapt the definition of TIA to the application domain, design activities that map to trauma-informed principles, and consequent design choices. We conclude with methodological recommendations to improve trauma-informed design processes.
https://doi.org/10.1145/3613904.3642045
The Human-Computer Interaction (HCI) community has consistently focused on the experiences of users moderated by social media platforms. Recently, scholars have noticed that moderation practices can perpetuate biases, resulting in the marginalization of user groups undergoing moderation. However, most studies have primarily addressed marginalization related to issues such as racism or sexism, with little attention given to the experiences of people with disabilities. In this paper, we present a study on the moderation experiences of blind users on TikTok, also known as "BlindTokers," to address this gap. We conducted semi-structured interviews with 20 BlindTokers and used thematic analysis to analyze the data. Two main themes emerged: BlindTokers' situated content moderation experiences and their reactions to content moderation. We report on the lack of accessibility on TikTok's platform, which contributed to the moderation and marginalization of BlindTokers. Additionally, we found instances of harassment from trolls that prompted BlindTokers to respond with harsh language, triggering further moderation. We discuss these findings in the context of the literature on moderation, marginalization, and transformative justice, seeking solutions to address such issues.
https://doi.org/10.1145/3613904.3642148
Due to the limitations imposed by the COVID-19 pandemic, customers have shifted their shopping patterns from offline to online. Livestream shopping has become a popular form of online shopping. However, various malicious selling behaviors by streamers have been reported. In this research, we sought to explore streamers’ malicious selling strategies and understand how viewers perceive these strategies. First, we recorded 40 livestream shopping sessions from two popular livestream platforms in China, Taobao and TikTok. We identified 16 malicious selling strategies that were used to deceive, coerce, or manipulate viewers and found that platform designs enhanced nine of them. Second, through an interview study with 13 viewers, we report three challenges of overcoming malicious selling related to the imbalanced power among viewers, streamers, and platforms. We conclude by discussing the policy and design implications of countering malicious selling.