Deepfake, Real Harm: A Participatory Approach for Imagining Infrastructures to Combat Deepfake Sexual Abuse

Abstract

With generative AI enabling easier production of sexually abusive content, deepfake sexual abuse has intensified, making anyone with visual data online a potential victim or perpetrator. Current moderation systems for non-consensual intimate imagery (NCII) are platform-centric, reactive, and poorly aligned with the workflows of real-time monitors and survivor supporters. To address this gap, we held participatory design workshops with 10 activists affiliated with victim advocacy organizations and survivors experienced in combating deepfake sexual abuse in South Korea. Their insights revealed distinctive challenges, including ambiguity in content classification, barriers to evidence collection, and increased workloads and safety risks during monitoring. Participants suggested features for proactive protection, long-term case tracking, and cross-platform coordination, while emphasizing the need for conversations about data ownership and platform accountability. Based on these findings, we discuss design implications for systems and policies that foster multi-stakeholder collaboration to prevent harm, strengthen cross-platform response, and reduce secondary trauma for activists.

Authors
Saetbyeol LeeYouk
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Joseph Seering
KAIST, Daejeon, Korea, Republic of

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Social Media Discourse and Online Harms

P1 - Room 119
7 presentations
2026-04-17, 20:15–21:45