Privacy and Deepfake

Conference Name
CHI 2024
Non-Consensual Synthetic Intimate Imagery: Prevalence, Attitudes, and Knowledge in 10 Countries
Abstract

Deepfake technologies have become ubiquitous, "democratizing" the ability to manipulate photos and videos. One popular use of deepfake technology is the creation of sexually explicit content, which can then be posted and shared widely on the internet. Drawing on a survey of over 16,000 respondents in 10 different countries, this article examines attitudes and behaviors related to "deepfake pornography" as a specific form of non-consensual synthetic intimate imagery (NSII). Our study found that deepfake pornography behaviors were considered harmful by respondents, despite nascent societal awareness. Regarding the prevalence of deepfake pornography victimization and perpetration, 2.2% of all respondents indicated personal victimization, and 1.8% of all respondents indicated perpetration behaviors. Respondents from countries with specific legislation still reported perpetration and victimization experiences, suggesting NSII laws are inadequate to deter perpetration. Approaches to prevent and reduce harms may include digital literacy education, as well as enforced platform policies, practices, and tools which better detect, prevent, and respond to NSII content.

Authors
Rebecca Umbach
Google, San Francisco, California, United States
Nicola Henry
Royal Melbourne Institute of Technology, Melbourne, Australia
Gemma Faye Beard
Royal Melbourne Institute of Technology, Melbourne, Australia
Colleen M. Berryessa
Rutgers University, Newark, New Jersey, United States
Paper URL

https://doi.org/10.1145/3613904.3642382

Video
It's Trying Too Hard To Look Real: Deepfake Moderation Mistakes and Identity-Based Bias
Abstract

Online platforms employ manual human moderation to distinguish human-created social media profiles from deepfake-generated ones. Biased misclassification of real profiles as artificial can harm general users as well as specific identity groups; however, no work has yet systematically investigated such mistakes and biases. We conducted a user study (n=695) that investigates how 1) the identity of the profile, 2) whether the moderator shares that identity, and 3) which components of a profile are shown affect the perceived artificiality of the profile. We find statistically significant biases in people's moderation of LinkedIn profiles based on all three factors. Further, upon examining how moderators make decisions, we find they rely on mental models of AI and attackers, as well as typicality expectations (how they think the world works). The latter includes reliance on race/gender stereotypes. Based on our findings, we synthesize recommendations for the design of moderation interfaces, moderation teams, and security training.

Authors
Jaron Mink
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
Miranda Wei
University of Washington, Seattle, Washington, United States
Collins W. Munyendo
The George Washington University, Washington, District of Columbia, United States
Kurt Hugenberg
Indiana University, Bloomington, Indiana, United States
Tadayoshi Kohno
University of Washington, Seattle, Washington, United States
Elissa M. Redmiles
Georgetown University, Washington, District of Columbia, United States
Gang Wang
University of Illinois at Urbana-Champaign, Urbana, Illinois, United States
Paper URL

https://doi.org/10.1145/3613904.3641999

Video
Examining Human Perception of Generative Content Replacement in Image Privacy Protection
Abstract

The richness of information in photos can often threaten privacy, so image editing methods are often employed for privacy protection. Existing image privacy protection techniques, like blurring, often struggle to balance robust privacy protection with preserving image usability. To address this, we introduce a generative content replacement (GCR) method for image privacy protection, which seamlessly substitutes privacy-threatening content with similar and realistic substitutes, using state-of-the-art generative techniques. Compared with four prevalent image protection methods, GCR consistently exhibited low detectability, making the detection of edits remarkably challenging. GCR also performed reasonably well in hindering the identification of specific content and managed to sustain the image's narrative and visual harmony. This research serves as a pilot study and encourages further innovation on GCR and the development of tools that enable human-in-the-loop image privacy protection using approaches similar to GCR.
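The abstract does not spell out an implementation, but the core GCR idea, masking a privacy-threatening region and letting a generative model synthesize a realistic stand-in, can be sketched with an off-the-shelf inpainting pipeline. The Python sketch below uses the diffusers library; the model checkpoint, file names, and prompt are illustrative assumptions, not the authors' implementation.

# A minimal sketch of generative content replacement (GCR): mask the
# privacy-threatening region of a photo and let an inpainting model
# synthesize a similar but non-identifying substitute.
# Model choice, file names, and prompt are illustrative assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed off-the-shelf inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("photo.png").convert("RGB").resize((512, 512))
# White pixels mark the region to replace (e.g., an ID badge or street sign).
mask = Image.open("privacy_mask.png").convert("L").resize((512, 512))

# The prompt steers the replacement toward content of the same kind that no
# longer reveals the original private detail.
protected = pipe(
    prompt="a generic plastic ID badge with no readable text",
    image=image,
    mask_image=mask,
).images[0]
protected.save("photo_protected.png")

In a human-in-the-loop workflow like the one the paper envisions, a user would select the mask and review the generated substitute before sharing the image.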

Authors
Anran Xu
The University of Tokyo, Tokyo, Japan
Shitao Fang
The University of Tokyo, Tokyo, Japan
Huan Yang
Microsoft Research, Beijing, China
Simo Hosio
University of Oulu, Oulu, Finland
Koji Yatani
The University of Tokyo, Tokyo, Japan
Paper URL

https://doi.org/10.1145/3613904.3642103

Video
Dungeons & Deepfakes: Using scenario-based role-play to study journalists' behavior towards using AI-based verification tools for video content
Abstract

The evolving landscape of manipulated media, including the threat of deepfakes, has made information verification a daunting challenge for journalists. Technologists have developed tools to detect deepfakes, but these tools can sometimes yield inaccurate results, raising concerns about inadvertently disseminating manipulated content as authentic news. This study examines the impact of unreliable deepfake detection tools on information verification. We conducted role-playing exercises with 24 US journalists, immersing them in complex breaking-news scenarios where determining authenticity was challenging. Through these exercises, we explored questions regarding journalists' investigative processes, use of a deepfake detection tool, and decisions on when and what to publish. Our findings reveal that journalists are diligent in verifying information, but sometimes rely too heavily on results from deepfake detection tools. We argue for more cautious release of such tools, accompanied by proper training for users to mitigate the risk of unintentionally propagating manipulated content as real news.

Authors
Saniat Sohrawardi
Rochester Institute of Technology, Rochester, New York, United States
Matthew Wright
Rochester Institute of Technology, Rochester, New York, United States
Yijing Kelly Wu
Rochester Institute of Technology, Rochester, New York, United States
Andrea Hickerson
The University of Mississippi, Oxford, Mississippi, United States
Paper URL

https://doi.org/10.1145/3613904.3641973

Video
Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks
Abstract

Privacy is a key principle for developing ethical AI technologies, but how does including AI technologies in products and services change privacy risks? We constructed a taxonomy of AI privacy risks by analyzing 321 documented AI privacy incidents. We codified how the unique capabilities and requirements of AI technologies described in those incidents generated new privacy risks, exacerbated known ones, or otherwise did not meaningfully alter the risk. We present 12 high-level privacy risks that AI technologies either newly created (e.g., exposure risks from deepfake pornography) or exacerbated (e.g., surveillance risks from collecting training data). One upshot of our work is that incorporating AI technologies into a product can alter the privacy risks it entails. Yet, current approaches to privacy-preserving AI/ML (e.g., federated learning, differential privacy, checklists) only address a subset of the privacy risks arising from the capabilities and data requirements of AI.
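As context for the privacy-preserving approaches the abstract contrasts against, the sketch below shows the Laplace mechanism, a textbook building block of differential privacy. It protects individual records in aggregate queries, which illustrates why such data-level defenses cover only a subset of the capability-driven risks (e.g., deepfake-based exposure) in the taxonomy. The data, query, and epsilon value are illustrative assumptions, not drawn from the paper.

# A minimal sketch of the Laplace mechanism from differential privacy.
# A counting query has L1 sensitivity 1 (adding or removing one person
# changes the count by at most 1), so Laplace noise with scale 1/epsilon
# yields an epsilon-differentially-private answer.
# The data, predicate, and epsilon below are illustrative.
import numpy as np

def dp_count(values, predicate, epsilon=1.0):
    """Return a differentially private count of values matching predicate."""
    true_count = float(sum(1 for v in values if predicate(v)))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 52, 38]
print(dp_count(ages, lambda x: x > 30, epsilon=0.5))  # noisy count of people over 30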

Award
Best Paper
Authors
Hao-Ping (Hank) Lee
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Yu-Ju Yang
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Thomas Serban von Davier
University of Oxford, Oxford, United Kingdom
Jodi Forlizzi
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Sauvik Das
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3613904.3642116

Video