"I know even if you don't tell me": Understanding Users' Privacy Preferences Regarding AI-based Inferences of Sensitive Information for Personalization

Abstract

Personalization improves user experience by tailoring interactions to each user's background and preferences. However, personalization requires information about users that platforms often collect without their awareness or their enthusiastic consent. Here, we study how the transparency of AI inferences on users' personal data affects their privacy decisions and sentiments when sharing data for personalization. We conducted two experiments in which participants (N=877) answered questions about themselves for personalized public arts recommendations. Participants indicated their consent to let the system use their inferred data and explicitly provided data after becoming aware of the inferences. Our results show that participants chose restrictive consent decisions for sensitive and incorrect inferences about them, as well as for the answers that led to such inferences. Our findings expand existing privacy discourse to inferences and inform future directions for shaping existing consent mechanisms in light of increasingly pervasive AI inferences.

Authors
Sumit Asthana
University of Michigan, Ann Arbor, Michigan, United States
Jane Im
University of Michigan, Ann Arbor, Michigan, United States
Zhe Chen
University of Michigan, Ann Arbor, Michigan, United States
Nikola Banovic
University of Michigan, Ann Arbor, Michigan, United States
Paper URL

https://doi.org/10.1145/3613904.3642180

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Privacy for Immersive Tracking

314
5 presentations
2024-05-14, 20:00–21:20