With generative AI enabling easier production of sexually abusive content, deepfake sexual abuse has intensified, making anyone with visual data a potential victim or perpetrator. Current moderation systems for non-consensual intimate imagery (NCII) are platform-centric, reactive, and poorly aligned with the workflows of real-time monitors and survivor supporters. To address this gap, we held participatory design workshops with 10 activists affiliated with victim advocacy organizations and survivors experienced in combating deepfake sexual abuse in South Korea. Their insights revealed distinctive challenges, including ambiguity in content classification, barriers to evidence collection, and increased workloads and safety risks during monitoring. Participants suggested features for proactive protection, long-term case tracking, and cross-platform coordination, while emphasizing the need for conversations about data ownership and platform accountability. Based on these findings, we discuss design implications for systems and policy that foster multi-stakeholder collaboration to prevent harm, strengthen cross-platform response, and reduce secondary trauma for activists.
ACM CHI Conference on Human Factors in Computing Systems