Content creators---social media personalities with large audiences on platforms like Instagram, TikTok, and YouTube---face a heightened risk of online hate and harassment. We surveyed 135 creators to understand their personal experiences with attacks (including toxic comments, impersonation, stalking, and more), the coping practices they employ, and gaps they experience with existing solutions (such as moderation or reporting). We find that while a majority of creators view audience interactions favorably, nearly every creator could recall at least one incident of hate and harassment, and attacks are a regular occurrence for one in three creators. As a result of hate and harassment, creators report self-censoring their content and leaving platforms. Through their personal stories, their attitudes towards platform-provided tools, and their strategies for coping with attacks and harms, we inform the broader design space for how to better protect people online from hate and harassment.
We know surprisingly little about the prevalence and severity of cybercrime in the U.S. Yet, in order to prioritize the development and distribution of advice and technology to protect end users, we require empirical evidence regarding cybercrime. Measuring crime, including cybercrime, is a challenging problem that relies on a combination of direct crime reports to the government -- which have known issues of under-reporting -- and assessment via carefully designed self-report surveys. We report on the first large-scale, nationally representative academic survey (n=11,953) of consumer cybercrime experiences in the U.S. Our analysis answers four research questions: (1) What is the prevalence of the cybercrimes we measure in the U.S.? (2) What is their monetary impact? (3) Do inequities exist in victimization? (4) Can we improve cybercrime measurement by leveraging social-reporting techniques used to measure physical crime? Our analysis also offers insight toward improving future measurement of cybercrime and protecting users.
Survivors of intimate partner violence (IPV) face complex threats to their digital privacy and security. Prior work has established protocols for directly helping them mitigate these harms; however, there remains a need for flexible and pluralistic systems that can support survivors' long-term needs. This paper describes the design and development of sociotechnical infrastructure that incorporates feminist notions of care to connect IPV survivors experiencing technology abuse with volunteer computer security consultants. We present findings from a mixed-methods study that draws on data from an 8-month, real-world deployment, as well as interviews with 7 volunteer technology consultants and 18 IPV professionals. Our findings illuminate emergent challenges in safely and adaptively providing computer security advice as care. We discuss implications of these findings for feminist approaches to computer security and privacy, and provide broader lessons for interventions that aim to directly assist at-risk and marginalized people experiencing digital insecurity.
The possibility that ordinary users can be successfully recruited into cyberattacks represents a considerable vulnerability, because it implies that citizens may legitimize cyberattacks instead of condemning them. We propose an argumentative approach to identify which premises allow such legitimization. To showcase this approach, we created four short narratives describing cyberattacks involving generic users and covering different motives for the attacks: profit, recreation, revenge, and ideology. A sample of 16 participants read the four narratives and were then interviewed to express their positions on the attacks described. All interview transcripts were analyzed with this argumentative approach, and 15 premises were found to account for the different positions taken. We describe the premises and their distribution across the four narratives, and discuss the implications of this approach for cybersecurity.