Effects of Algorithmic Flagging on Fairness: Quasi-experimental Evidence from Wikipedia

Abstract

Online community moderators often rely on social signals, like whether a user has an account or a profile page, as clues that a user is likely to cause problems. Reliance on these clues may lead to "over-profiling" bias when moderators focus on these signals but overlook misbehavior by others. We propose that algorithmic flagging systems deployed to improve the efficiency of moderation work can also make moderation actions fairer to these users by reducing reliance on social signals and making norm violations by everyone else more visible. We analyze moderator behavior in Wikipedia as mediated by a system called RCFilters, which displays social signals and algorithmic flags, and estimate the causal effect of being flagged on moderator actions. We show that algorithmically flagged edits are reverted more often, especially edits by established editors with positive social signals, and that flagging decreases the likelihood that moderation actions will be undone. Our results suggest that algorithmic flagging systems can lead to increased fairness, but that the relationship is complex and contingent.
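The abstract does not spell out the identification strategy, but one standard quasi-experimental design for systems that flag edits above a model-score cutoff is regression discontinuity: compare revert rates for edits just below and just above the threshold, where the edits are otherwise similar. The sketch below illustrates that idea on simulated data; the score variable, the 0.6 cutoff, the 0.1 bandwidth, and all effect sizes are hypothetical assumptions made for the illustration, not values taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# --- Simulated data (all values hypothetical, for illustration only) ---
rng = np.random.default_rng(0)
n = 5_000
score = rng.uniform(0, 1, n)           # damage-prediction score for each edit
CUTOFF = 0.6                           # assumed flagging threshold
flagged = (score >= CUTOFF).astype(int)
# Revert probability rises smoothly with score, plus a jump when flagged.
reverted = rng.binomial(1, 0.10 + 0.20 * score + 0.15 * flagged)

df = pd.DataFrame({"reverted": reverted, "score": score, "flagged": flagged})

# --- Local linear regression discontinuity around the cutoff ---
BANDWIDTH = 0.1
local = df[(df.score > CUTOFF - BANDWIDTH) & (df.score < CUTOFF + BANDWIDTH)].copy()
local["centered"] = local["score"] - CUTOFF

# Linear probability model with separate slopes on each side of the cutoff;
# the coefficient on `flagged` estimates the jump in revert rate at the threshold.
model = smf.ols("reverted ~ flagged + centered + flagged:centered", data=local).fit()
print(model.params["flagged"], model.bse["flagged"])
```

In a real analysis, the estimated discontinuity would need sensitivity checks over bandwidth and functional form; this sketch only shows the shape of the comparison.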

Authors
Nathan TeBlunthuis
University of Washington, Seattle, Washington, United States
Benjamin Mako Hill
University of Washington, Seattle, Washington, United States
Aaron Halfaker
Microsoft, Redmond, Washington, United States
Paper URL

https://doi.org/10.1145/3449130


Conference: CSCW 2021

The 24th ACM Conference on Computer-Supported Cooperative Work and Social Computing

Session: Specialist and Collaborative Work // Algorithmic Fairness

Papers Room C
8 presentations
2021-10-25 21:00–22:30