It's Trying Too Hard To Look Real: Deepfake Moderation Mistakes and Identity-Based Bias

Abstract

Online platforms employ manual human moderation to distinguish human-created social media profiles from deepfake-generated ones. Biased misclassification of real profiles as artificial can harm general users as well as specific identity groups; however, no work has yet systematically investigated such mistakes and biases. We conducted a user study (n=695) that investigates how 1) the identity of the profile, 2) whether the moderator shares that identity, and 3) the profile components shown affect the perceived artificiality of the profile. We find statistically significant biases in people's moderation of LinkedIn profiles based on all three factors. Further, upon examining how moderators make decisions, we find that they rely on mental models of AI and attackers, as well as typicality expectations (how they think the world works). The latter includes reliance on race/gender stereotypes. Based on our findings, we synthesize recommendations for the design of moderation interfaces, moderation teams, and security training.

Authors
Jaron Mink
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
Miranda Wei
University of Washington, Seattle, Washington, United States
Collins W. Munyendo
The George Washington University, Washington, District of Columbia, United States
Kurt Hugenberg
Indiana University, Bloomington, Indiana, United States
Tadayoshi Kohno
University of Washington, Seattle, Washington, United States
Elissa M. Redmiles
Georgetown University, Washington, District of Columbia, United States
Gang Wang
University of Illinois at Urbana-Champaign, Urbana, Illinois, United States
Paper URL

https://doi.org/10.1145/3613904.3641999

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Privacy and Deepfake

Room: 313C
5 presentations
2024-05-14 20:00:00 – 2024-05-14 21:20:00