Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks

Abstract

Privacy is a key principle for developing ethical AI technologies, but how does including AI technologies in products and services change privacy risks? We constructed a taxonomy of AI privacy risks by analyzing 321 documented AI privacy incidents. We codified how the unique capabilities and requirements of AI technologies described in those incidents generated new privacy risks, exacerbated known ones, or otherwise did not meaningfully alter the risk. We present 12 high-level privacy risks that AI technologies either newly created (e.g., exposure risks from deepfake pornography) or exacerbated (e.g., surveillance risks from collecting training data). One upshot of our work is that incorporating AI technologies into a product can alter the privacy risks it entails. Yet, current approaches to privacy-preserving AI/ML (e.g., federated learning, differential privacy, checklists) only address a subset of the privacy risks arising from the capabilities and data requirements of AI.

Award
Best Paper
Authors
Hao-Ping (Hank) Lee
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Yu-Ju Yang
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Thomas Serban von Davier
University of Oxford, Oxford, United Kingdom
Jodi Forlizzi
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Sauvik Das
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

doi.org/10.1145/3613904.3642116


Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Privacy and Deepfake

Room: 313C
5 presentations
2024-05-14, 20:00–21:20