RECAST: Enabling User Recourse and Interpretability of Toxicity Detection Models with Interactive Visualization

Abstract

With the widespread use of toxic language online, platforms increasingly rely on automated systems that leverage advances in natural language processing to automatically flag and remove toxic comments. However, most automated systems---while detecting and moderating toxic language---do not provide feedback to their users, let alone an avenue of recourse for users to make actionable changes. We present RECAST, an interactive, open-source web tool for visualizing these models' toxic predictions, which provides alternative suggestions for flagged toxic language and a new path of recourse for users. RECAST highlights the text responsible for a toxicity classification and allows users to interactively substitute potentially toxic phrases with neutral alternatives. We examined the effect of RECAST via two large-scale user evaluations and found that RECAST was highly effective at helping users reduce toxicity as detected by the model, and that users gained a stronger understanding of the underlying toxicity criteria used by black-box models, enabling transparency and recourse. In addition, we found that when users focus on optimizing language for these models instead of relying on their own judgment (which is the implied incentive and goal of deploying such models at all), the models cease to be effective classifiers of toxicity compared to human annotations. This opens a discussion of how toxicity detection models work and should work, and of their effect on future discourse.

Authors
Austin P. Wright
Georgia Institute of Technology, Atlanta, Georgia, United States
Omar Shaikh
Georgia Institute of Technology, Atlanta, Georgia, United States
Haekyu Park
Georgia Institute of Technology, Atlanta, Georgia, United States
Will Epperson
Georgia Institute of Technology, Atlanta, Georgia, United States
Muhammed Ahmed
Mailchimp, Atlanta, Georgia, United States
Stephane Pinel
Mailchimp, Atlanta, Georgia, United States
Duen Horng Chau
Georgia Institute of Technology, Atlanta, Georgia, United States
Diyi Yang
Georgia Institute of Technology, Atlanta, Georgia, United States
Paper URL

https://doi.org/10.1145/3449280

Video

Conference: CSCW 2021

The 24th ACM Conference on Computer-Supported Cooperative Work and Social Computing

Session: Antisocial Computing

Papers Room A
8 presentations
2021-10-26 19:00:00 – 2021-10-26 20:30:00