Antisocial Computing

Conference Name
CSCW 2021
"How over is it?" Understanding the Incel Community on YouTube
Abstract

YouTube is by far the largest host of user-generated video content worldwide. Alas, the platform also hosts inappropriate, toxic, and hateful content. One community that has often been linked to sharing and publishing hateful and misogynistic content is the so-called Involuntary Celibates (Incels), a loosely defined movement ostensibly focusing on men's issues. In this paper, we set out to analyze the Incel community on YouTube by focusing on this community's evolution over the last decade and understanding whether YouTube's recommendation algorithm steers users towards Incel-related videos. We collect videos shared on Incel communities within Reddit and perform a data-driven characterization of the content posted on YouTube. Among other things, we find that the Incel community on YouTube is gaining traction and that during the last decade, the number of Incel-related videos and comments rose substantially. We also find that users have a 6.3% chance of being suggested an Incel-related video by YouTube's recommendation algorithm within five hops when starting from a non-Incel-related video. Overall, our findings paint an alarming picture of online radicalization: not only is Incel activity increasing over time, but platforms may also play an active role in steering users towards such extreme content.
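The "6.3% within five hops" finding above is the kind of number a random-walk audit of a recommendation graph produces. A minimal sketch of that measurement style, using a tiny invented graph and labels (not the paper's data or YouTube's actual recommendations):

```python
import random

# Toy recommendation graph: each video maps to the videos it recommends.
# Nodes, labels, and structure are illustrative only.
RECS = {
    "start": ["a", "b"],
    "a": ["b", "incel1"],
    "b": ["a", "c"],
    "c": ["b", "incel1"],
    "incel1": ["incel2", "a"],
    "incel2": ["incel1", "b"],
}
INCEL = {"incel1", "incel2"}  # videos labeled as Incel-related

def hit_within_hops(graph, start, labeled, max_hops, walks=10_000, seed=0):
    """Estimate the probability that a uniform random walk from `start`
    reaches a labeled node within `max_hops` recommendation clicks."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(walks):
        node = start
        for _ in range(max_hops):
            node = rng.choice(graph[node])
            if node in labeled:
                hits += 1
                break
    return hits / walks

p = hit_within_hops(RECS, "start", INCEL, max_hops=5)
```

On a real crawl the graph would come from recorded recommendation lists, and `p` would be averaged over many non-Incel starting videos.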

Authors
Kostantinos Papadamou
Cyprus University of Technology, Limassol, Cyprus
Savvas Zannettou
Max Planck Institute, Saarbrücken, Germany
Jeremy Blackburn
Binghamton University, Binghamton, New York, United States
Emiliano De Cristofaro
University College London, London, United Kingdom
Gianluca Stringhini
Boston University, Boston, Massachusetts, United States
Michael Sirivianos
Cyprus University of Technology, Nicosia, Cyprus
Paper URL

https://doi.org/10.1145/3479556

Video
Educators, Solicitors, Flamers, Motivators, Sympathizers: Characterizing Roles in Online Extremist Movements
Abstract

Social media provides the means by which extremist social movements, such as white supremacy and anti-LGBTQ, thrive online. Yet, we know little about the roles played by the participants of such movements. In this paper, we investigate these participants to characterize their roles, their role dynamics, and their influence in spreading online extremism. Our participants—online extremist accounts—are 4,876 public Facebook pages or groups that have shared information from the websites of 289 Southern Poverty Law Center (SPLC) designated extremist groups. Guided by theories of participatory activism, we map the information sharing features of these extremist accounts. By clustering the quantitative features followed by qualitative expert validation, we identify five roles surrounding extremist activism—educators, solicitors, flamers, motivators, sympathizers. For example, solicitors use links from extremist websites to attract donations and participation in extremist issues, whereas flamers share inflammatory extremist content inciting anger. We further investigate role dynamics, such as how stable these roles are over time and how likely extremist accounts are to transition from one role into another. We find that roles core to the movement—educators and solicitors—are more stable, while flamers and motivators can transition to sympathizers with high probability. Finally, using a Hawkes process model, we test which roles are more influential in spreading various types of information. We find that educators and solicitors exert the most influence in triggering extremist link posts, whereas flamers are influential in triggering the spread of information from fake news sources. Our results help in situating various roles on the trajectory of deeper engagement into the extremist movements and understanding the potential effect of various counter-extremism interventions. Our findings have implications for understanding how online extremist movements flourish through participatory activism and how they gain a spectrum of allies for mobilizing extremism online.
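The Hawkes process used above to test role influence models self-excitation: each post temporarily raises the rate of follow-on posts. A minimal sketch of the univariate conditional intensity, with invented parameter values (not the paper's fitted model):

```python
import math

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity of a univariate Hawkes process:
    lambda(t) = mu + sum over past events t_i < t of alpha * exp(-beta * (t - t_i)).
    mu is the baseline rate; each past event adds an exponentially
    decaying boost, so bursts of posts beget more posts."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti < t)

# Example: posts by one account at t = 1, 2, 2.5 (illustrative timestamps).
events = [1.0, 2.0, 2.5]
baseline = hawkes_intensity(0.5, events)  # before any event: just mu
excited  = hawkes_intensity(2.6, events)  # shortly after a burst
```

In the multivariate version used for influence testing, a separate `alpha` per pair of roles measures how strongly one role's posts trigger another's.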

Authors
Shruti Phadke
University of Washington, Seattle, Washington, United States
Tanushree Mitra
University of Washington, Seattle, Washington, United States
Paper URL

https://doi.org/10.1145/3476051

Video
Do Platform Migrations Harm the Effectiveness of Content Moderation? Evidence from r/The_Donald and r/Incels
Abstract

When toxic online communities on mainstream platforms face moderation measures, such as bans, they may migrate to other platforms with laxer policies or set up their own dedicated website. Previous work suggests that, within mainstream platforms, community-level moderation is effective in mitigating the harm caused by the moderated communities. It is, however, unclear whether these results also hold when considering the broader Web ecosystem. Do toxic communities continue to grow in terms of user base and activity on their new platforms? Do their members become more toxic and ideologically radicalized? In this paper, we report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures. We analyze data from r/The_Donald and r/Incels, two communities that were banned from Reddit and subsequently migrated to their own standalone websites. Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers. In spite of that, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community. Overall, our results paint a nuanced portrait of the consequences of community-level moderation and can inform their design and deployment.

Award
Honorable Mention
Authors
Manoel Horta Ribeiro
EPFL, Lausanne, Switzerland
Shagun Jhaver
University of Washington, Seattle, Washington, United States
Savvas Zannettou
Max Planck Institute, Saarbrücken, Germany
Jeremy Blackburn
Binghamton University, Binghamton, New York, United States
Emiliano De Cristofaro
University College London, London, United Kingdom
Gianluca Stringhini
Boston University, Boston, Massachusetts, United States
Robert West
EPFL, Lausanne, Switzerland
Paper URL

https://doi.org/10.1145/3476057

Video
Moving with the Times: Investigating the Alt-Right Network Gab with Temporal Interaction Graphs
Abstract

Gab is an online social network often associated with the alt-right political movement and users barred from other networks. It presents an interesting opportunity for research because near-complete data is available from day one of the network’s creation. In this paper, we investigate the evolution of the user interaction graph, that is, the graph in which a link represents one user interacting with another at a given time. We view this graph both at different times and at different timescales. The latter is achieved by using sliding windows on the graph, which gives a novel perspective on social network data. The Gab network grows relatively slowly over a period of months but is subject to large bursts of arrivals over hours and days. We identify plausible events of interest to the Gab community associated with the most obvious such bursts. The network is characterised by interactions between ‘strangers’ rather than by reinforcing links between ‘friends’. Gab usage follows the diurnal cycle of the predominantly US- and Europe-based users. At off-peak hours the Gab interaction network fragments into sub-networks with absolutely no interaction between them. A small group of users are highly influential across larger timescales, but a substantial number of users gain influence for short periods of time. Temporal analysis at different timescales gives new insights above and beyond what could be found on static graphs.
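The sliding-window view described above restricts the interaction graph to edges inside a time window and inspects its structure, e.g. how many disconnected sub-networks appear at off-peak hours. A minimal sketch on an invented set of timestamped interactions (not Gab data):

```python
from collections import defaultdict

# Timestamped interactions (user_a, user_b, hour). Illustrative only.
EDGES = [
    ("u1", "u2", 0), ("u2", "u3", 1), ("u4", "u5", 2),
    ("u1", "u3", 10), ("u4", "u5", 11), ("u5", "u6", 12),
]

def window_components(edges, start, width):
    """Connected components of the interaction graph restricted to
    edges whose timestamp falls in [start, start + width)."""
    adj = defaultdict(set)
    for a, b, t in edges:
        if start <= t < start + width:
            adj[a].add(b)
            adj[b].add(a)
    seen, components = set(), []
    for node in adj:                      # depth-first search per component
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components

early = window_components(EDGES, 0, 5)   # fragments into two sub-networks
late  = window_components(EDGES, 10, 5)  # again two disconnected groups
```

Sliding the window forward and tracking the component count over time exposes exactly the off-peak fragmentation the abstract reports.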

Authors
Naomi A. Arnold
Queen Mary University of London, London, United Kingdom
Benjamin Steer
Queen Mary University of London, London, United Kingdom
Imane Hafnaoui
Queen Mary University of London, London, United Kingdom
Hugo A. Parada G.
Universidad Politécnica de Madrid, Madrid, Spain
Raul J Mondragon
Queen Mary University of London, London, United Kingdom
Felix Cuadrado
Universidad Politécnica de Madrid, Madrid, Spain
Richard Clegg
Queen Mary University of London, London, United Kingdom
Paper URL

https://doi.org/10.1145/3479591

Video
RECAST: Enabling User Recourse and Interpretability of Toxicity Detection Models with Interactive Visualization
Abstract

With the widespread use of toxic language online, platforms are increasingly using automated systems that leverage advances in natural language processing to automatically flag and remove toxic comments. However, most automated systems---while detecting and moderating toxic language---do not provide feedback to their users, let alone provide an avenue of recourse for users to make actionable changes. We present our work, RECAST, an interactive, open-source web tool for visualizing these models' toxic predictions, while providing alternative suggestions for flagged toxic language and a new path of recourse for users. RECAST highlights text responsible for classifying toxicity, and allows users to interactively substitute potentially toxic phrases with neutral alternatives. We examined the effect of RECAST via two large-scale user evaluations, and found that RECAST was highly effective at helping users reduce toxicity as detected by the model, and that users gained a stronger understanding of the underlying toxicity criterion used by black-box models, enabling transparency and recourse. In addition, we found that when users focus on optimizing language for these models instead of their own judgement (the implied incentive and goal of deploying such models at all), these models cease to be effective classifiers of toxicity compared to human annotations. This opens a discussion of how toxicity detection models do and should work, and of their effect on future discourse.
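The highlight-and-substitute interaction described above can be sketched with a toy lexicon-based classifier standing in for the black-box model; the lexicon, scores, and suggested replacements below are invented for illustration and are not RECAST's actual model or interface:

```python
# Toy stand-in for a toxicity model: flagged tokens and neutral alternatives.
TOXIC_LEXICON = {"idiot": "person", "stupid": "misguided", "trash": "poor"}

def highlight_and_suggest(comment):
    """Return (score, highlights): score is the fraction of flagged tokens,
    highlights maps each flagged token to a neutral alternative."""
    tokens = comment.lower().split()
    flagged = {t: TOXIC_LEXICON[t] for t in tokens if t in TOXIC_LEXICON}
    score = sum(t in TOXIC_LEXICON for t in tokens) / max(len(tokens), 1)
    return score, flagged

def detoxify(comment):
    """Apply every suggested substitution, as a user accepting all hints would."""
    return " ".join(TOXIC_LEXICON.get(t, t) for t in comment.lower().split())

score, flags = highlight_and_suggest("that idiot wrote stupid code")
cleaned = detoxify("that idiot wrote stupid code")
```

The abstract's caveat applies even to this toy: rewriting to satisfy the lexicon lowers the score without necessarily changing the comment's hostile intent.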

Authors
Austin P. Wright
Georgia Institute of Technology, Atlanta, Georgia, United States
Omar Shaikh
Georgia Institute of Technology, Atlanta, Georgia, United States
Haekyu Park
Georgia Institute of Technology, Atlanta, Georgia, United States
Will Epperson
Georgia Institute of Technology, Atlanta, Georgia, United States
Muhammed Ahmed
Mailchimp, Atlanta, Georgia, United States
Stephane Pinel
Mailchimp, Atlanta, Georgia, United States
Duen Horng Chau
Georgia Institute of Technology, Atlanta, Georgia, United States
Diyi Yang
Georgia Institute of Technology, Atlanta, Georgia, United States
Paper URL

https://doi.org/10.1145/3449280

Video
Punishment and Its Discontents: An Analysis of Permanent Ban in an Online Game Community
Abstract

Online platforms such as multiplayer online games often struggle with widespread toxic behaviors such as flaming, harassment, and hate speech. To discipline their users, platforms usually adopt a punitive approach that issues punishments ranging from a warning message to content removal to permanent ban (PB). As the harshest punishment a platform can impose on a user, PB deprives the user of their privileges on the platform, such as account access and purchased content. Yet little is known about the disciplinary effect of PB on the user community. In this study, we analyzed PB in League of Legends, one of the largest online games today. We identified five distinct player discourses regarding PB, revealing how PB is only nominally a disciplinary device and functions primarily as a platform rhetoric. Our findings point to the value of a restorative approach and, more specifically, the need to contextualize toxicity.

Authors
Yubo Kou
Pennsylvania State University, State College, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3476075

Video
Evaluating the Effectiveness of Deplatforming as a Moderation Strategy on Twitter
Abstract

Deplatforming refers to the permanent ban of controversial public figures with large followings on social media sites. In recent years, platforms like Facebook, Twitter and YouTube have deplatformed many influencers to curb the spread of offensive speech. We present a case study of three high-profile influencers who were deplatformed on Twitter—Alex Jones, Milo Yiannopoulos, and Owen Benjamin. Working with over 49M tweets, we found that deplatforming significantly reduced the number of conversations about all three individuals on Twitter. Further, analyzing the Twitter-wide activity of these influencers’ supporters, we show that the overall activity and toxicity levels of supporters declined after deplatforming. We contribute a methodological framework to systematically examine the effectiveness of moderation interventions and discuss broader implications of using deplatforming as a moderation strategy.
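The core comparison behind findings like "activity and toxicity levels of supporters declined after deplatforming" is a pre/post contrast around the ban date. A minimal sketch with invented daily toxicity scores (not the paper's data, which also controls for confounds the sketch omits):

```python
# Illustrative daily average toxicity of supporters' tweets around a ban date.
# All numbers are invented for the example.
pre  = [0.31, 0.28, 0.35, 0.30, 0.33]   # days before deplatforming
post = [0.22, 0.25, 0.21, 0.24, 0.20]   # days after deplatforming

def mean(xs):
    """Arithmetic mean of a non-empty sequence."""
    return sum(xs) / len(xs)

# Positive drop = toxicity declined after the intervention.
drop = mean(pre) - mean(post)
```

The paper's methodological framework builds on this idea with much longer observation windows and significance testing rather than a raw difference of means.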

Award
Honorable Mention
Authors
Shagun Jhaver
Rutgers University, New Brunswick, New Jersey, United States
Christian Boylston
Georgia Institute of Technology, Atlanta, Georgia, United States
Diyi Yang
Georgia Institute of Technology, Atlanta, Georgia, United States
Amy Bruckman
Georgia Institute of Technology, Atlanta, Georgia, United States
Paper URL

https://doi.org/10.1145/3479525

Video
A Human-Centered Systematic Literature Review of Cyberbullying Detection Algorithms
Abstract

Cyberbullying is a growing problem across social media platforms, inflicting short- and long-lasting effects on victims. As such, research has looked into building automated systems, powered by machine learning, to detect cyberbullying incidents, or the involved actors like victims and perpetrators. In the past, systematic reviews have examined the approaches within this growing body of work, but with a focus on the computational aspects of the technical innovation, feature engineering, or performance optimization, without centering around humans’ roles, beliefs, desires, or expectations. In this paper, we present a human-centered systematic literature review of the past 10 years of research on automated cyberbullying detection. We analyzed 56 papers based on a three-pronged human-centeredness algorithm design framework – spanning theoretical, participatory, and speculative design. We found that the past literature fell short of incorporating human-centeredness across multiple aspects, ranging from defining cyberbullying and establishing the ground truth in data annotation to evaluating the performance of the detection models and speculating about the usage and users of the models, including potential harms and negative consequences. Given the sensitivities of the cyberbullying experience and the deep ramifications cyberbullying incidents bear on the involved actors, we discuss takeaways on how incorporating human-centeredness in future research can aid with developing detection systems that are more practical, useful, and tuned to the diverse needs and contexts of the stakeholders.

Authors
Seunghyun Kim
Georgia Institute of Technology, Atlanta, Georgia, United States
Afsaneh Razi
University of Central Florida, Orlando, Florida, United States
Gianluca Stringhini
Boston University, Boston, Massachusetts, United States
Pamela J. Wisniewski
University of Central Florida, Orlando, Florida, United States
Munmun De Choudhury
Georgia Institute of Technology, Atlanta, Georgia, United States
Paper URL

https://doi.org/10.1145/3476066

Video