Non-Consensual Synthetic Intimate Imagery: Prevalence, Attitudes, and Knowledge in 10 Countries

Abstract

Deepfake technologies have become ubiquitous, "democratizing" the ability to manipulate photos and videos. One popular use of deepfake technology is the creation of sexually explicit content, which can then be posted and shared widely on the internet. Drawing on a survey of over 16,000 respondents in 10 different countries, this article examines attitudes and behaviors related to "deepfake pornography" as a specific form of non-consensual synthetic intimate imagery (NSII). Our study found that deepfake pornography behaviors were considered harmful by respondents, despite nascent societal awareness. Regarding the prevalence of deepfake pornography victimization and perpetration, 2.2% of all respondents indicated personal victimization, and 1.8% of all respondents indicated perpetration behaviors. Respondents from countries with specific legislation still reported perpetration and victimization experiences, suggesting NSII laws are inadequate to deter perpetration. Approaches to prevent and reduce harms may include digital literacy education, as well as enforced platform policies, practices, and tools which better detect, prevent, and respond to NSII content.

Authors
Rebecca Umbach
Google, San Francisco, California, United States
Nicola Henry
Royal Melbourne Institute of Technology, Melbourne, Australia
Gemma Faye Beard
Royal Melbourne Institute of Technology, Melbourne, Australia
Colleen M. Berryessa
Rutgers University, Newark, New Jersey, United States
Paper URL

doi.org/10.1145/3613904.3642382

Video

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Privacy and Deepfake

313C
5 presentations
2024-05-14 20:00:00
2024-05-14 21:20:00